http://blog.complexminds.net/?p=159
# Economic Growth Models

Economic growth is defined as 1. an increase in real GDP occurring over some time period, or 2. an increase in real GDP per capita occurring over some time period. Economic growth is calculated as a percentage rate of growth per quarter (3-month period) or per year.

Example: Real GDP in the US was \$12,976.2 billion in 2006 and \$13,254.1 billion in 2007, so the US economic growth rate for 2007 was

$\frac{13254.1 - 12976.2}{12976.2} \times 100\% \approx 2.1\%$

OK, so that's just a fact. Sometimes we have higher growth, sometimes lower. Some countries have higher growth than others at a particular time. So why do we need a model? Thinking with models, even simple models, can help us reach deeper understandings that enable us to better explain data, forecast the future with greater reliability, and see the potential ramifications of decisions, whether those are individual choices or national policies. Our simple models of economic growth will provide all three of those benefits: they will explain growth (at least partly), show why growth may differ across countries and change over time, and point toward particular policy actions.
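The growth-rate arithmetic can be checked with a couple of lines (the GDP figures are the ones quoted above):

```python
# US real GDP in billions of dollars, as quoted above
gdp_2006 = 12976.2
gdp_2007 = 13254.1

# growth rate = (new - old) / old, expressed as a percentage
growth_pct = (gdp_2007 - gdp_2006) / gdp_2006 * 100
print(f"US economic growth rate for 2007: {growth_pct:.1f}%")  # 2.1%
```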
https://codereview.stackexchange.com/questions/64868/fetching-detail-for-a-given-id-which-might-not-exist
# Fetching detail for a given ID, which might not exist

I'm working on a piece of code which uses the following:

```
public Detail GetDetail(int id)
{
    if (!_detail.ContainsKey(id))
    {
        GetDetailForNewObjects(id);
    }
    return _detail[id].Data;
}
```

The last line throws a NullReferenceException. Since this method should never be called in a way that produces that exception, my colleagues propose not catching it and rethrowing a more specific and descriptive one. I don't agree. The readability and maintainability of the code fall dramatically when we get a bare NullReferenceException. How would you write that piece of code?

• Is it possible that you actually need a line: `_detail.Add(id, GetDetailForNewObjects(id))`? Without knowing what the GetDetail... method does, it's hard to say. Oct 6, 2014 at 19:20
• Assert is handy when we know a thing will never happen, but I'm not sure I'd use it here. Jul 24, 2015 at 9:33

In this case, I would suggest that this exception should be classified as a 'boneheaded' exception as defined by Eric Lippert, because there is no reason why the code should throw the exception at all. Exceptions are expensive, as is handling them, so I would make every effort to have code that only throws them in truly exceptional circumstances.

It seems to me that the reason `_detail[id]` throws a NullReferenceException is that the GetDetailForNewObjects method had a problem, which could come in many forms; I'll outline two below. The problem could be one of:

• Your DB query (assuming that is what is happening) couldn't find any records. Solution: return a null object. Don't try to access something you know has a reasonable possibility of being null; instead check again for null and return null if no object exists after the call. Let the calling code figure out what to do with a null (maybe search results are empty; this is not an exception).
• Your DB query threw an exception. Solution: THIS is the exception that should be passed up. We shouldn't swallow a DB exception (since there is typically nothing that can be done about it) just to throw a null reference exception instead. That behavior would make debugging more difficult, since it hides the actual DB connection issue. Bubble the DB exception upwards so the code / developer can respond to that, instead of to an ambiguous NullReferenceException, because ultimately we need to know WHY it was null, since clearly we weren't expecting it to be.

In both cases, we as developers know how to handle the situation (given the assumptions I laid out). I would write the above code something like this:

```
public Detail GetDetail(int id)
{
    if (id < 0)
    {
        throw new ArgumentException("Argument should have a positive value.", "id");
    }

    Detail returnValue = null; // only assigned if the lookup succeeds
    if (!_detail.ContainsKey(id))
    {
        GetDetailForNewObjects(id); // assuming this adds the key to the dictionary
    }
    if (_detail.ContainsKey(id))    // cheaper and safer than handling an exception
    {
        returnValue = _detail[id].Data;
    }
    return returnValue;
}
```

I feel in this case we should propagate a NullReferenceException, as that does actually accurately describe the error that has occurred. You would gain little from wrapping this exception in another exception. The only exception (heh), in my opinion, would be if you wanted to wrap it in a domain-specific exception (`new MyLibraryException(ref)`), but this is debatable even then. By propagating a null reference exception you clearly signal to the programmer that the line on which it failed had a null reference (in this case, it is obvious that `_detail[id]` is null; if the key were missing instead, we would get a KeyNotFoundException).
https://par.nsf.gov/search/award_ids:1743747
# Search for: All records, Award ID contains: 1743747

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

1. Abstract: The blazar J1924–2914 is a primary Event Horizon Telescope (EHT) calibrator for the Galactic center's black hole Sagittarius A*. Here we present the first total and linearly polarized intensity images of this source obtained with the unprecedented 20 μas resolution of the EHT. J1924–2914 is a very compact flat-spectrum radio source with strong optical variability and polarization. In April 2017 the source was observed quasi-simultaneously with the EHT (April 5–11), the Global Millimeter VLBI Array (April 3), and the Very Long Baseline Array (April 28), giving a novel view of the source at four observing frequencies, 230, 86, … (Free, publicly-accessible full text available August 1, 2023.)
2. The sparse interferometric coverage of the Event Horizon Telescope (EHT) poses a significant challenge for both reconstruction and model fitting of black-hole images. PRIMO is a new principal-components-analysis-based algorithm for image reconstruction that uses the results of high-fidelity general relativistic magnetohydrodynamic simulations of low-luminosity accretion flows as a training set. This allows the reconstruction of images that are both consistent with the interferometric data and that live in the space of images spanned by the simulations. PRIMO follows Monte Carlo Markov chains to fit a linear combination of principal components derived from an ensemble of simulated … (Free, publicly-accessible full text available August 1, 2023.)
3. ABSTRACT: Intermediate-mass black holes (IMBHs, $10^{3\!-\!6} \, {\rm M_\odot }$) are typically found at the centres of dwarf galaxies and might be wandering, thus far undetected, in the Milky Way (MW). We use model spectra for advection-dominated accretion flows to compute the typical fluxes, in a range of frequencies spanning from radio to X-rays, emitted by a putative population of $10^5 \, {\rm M_\odot }$ IMBHs wandering in five realistic volume-weighted MW environments. We predict that $\sim 27{{\ \rm per\ cent}}$ of the wandering IMBHs can be detected in the X-ray with Chandra, $\sim 37{{\ \rm per\ cent}}$ in the … (Free, publicly-accessible full text available July 27, 2023.)
4. (Free, publicly-accessible full text available July 1, 2023.)
5. ABSTRACT: Numerical general relativistic radiative magnetohydrodynamic simulations of accretion discs around a stellar-mass black hole with a luminosity above 0.5 of the Eddington value reveal their stratified, elevated vertical structure. We refer to these thermally stable numerical solutions as puffy discs. Above a dense and geometrically thin core of dimensionless thickness h/r ∼ 0.1, crudely resembling a classic thin accretion disc, a puffed-up, geometrically thick layer of lower density is formed. This puffy layer corresponds to h/r ∼ 1.0, with a very limited dependence of the dimensionless thickness on the mass accretion rate. We discuss the observational properties of puffy … (Free, publicly-accessible full text available June 9, 2023.)
6. Abstract: State transitions in black hole X-ray binaries are likely caused by gas evaporation from a thin accretion disk into a hot corona. We present a height-integrated version of this process, which is suitable for analytical and numerical studies. With radius $r$ scaled to Schwarzschild units and coronal mass accretion rate $\dot{m}_c$ to Eddington units, the results of the model are independent of black hole mass. State transitions should thus be similar in X-ray binaries and active galactic nuclei. The corona solution consists of two power-law segments separated at a break radius $r_b \sim 10^3$ … (Free, publicly-accessible full text available June 1, 2023.)
7. Abstract: The extraordinary physical resolution afforded by the Event Horizon Telescope has opened a window onto the astrophysical phenomena unfolding on horizon scales in two known black holes, M87* and Sgr A*. However, with this leap in resolution has come a new set of practical complications. Sgr A* exhibits intraday variability that violates the assumptions underlying Earth aperture synthesis, limiting traditional image reconstruction methods to short timescales and data sets with very sparse (u, v) coverage. We present a new set of tools to detect and mitigate this variability. We develop a data-driven, model-agnostic procedure to … (Free, publicly-accessible full text available May 1, 2023.)
8. Abstract: Recent developments in very long baseline interferometry (VLBI) have made it possible for the Event Horizon Telescope (EHT) to resolve the innermost accretion flows of the largest supermassive black holes on the sky. The sparse nature of the EHT's (u, v)-coverage presents a challenge when attempting to resolve highly time-variable sources. We demonstrate that the changing (u, v)-coverage of the EHT can contain regions of time over the course of a single observation that facilitate dynamical imaging. These optimal time regions typically have projected baseline distributions that are approximately angularly isotropic and radially … (Free, publicly-accessible full text available May 1, 2023.)
9. Abstract: We present the first Event Horizon Telescope (EHT) observations of Sagittarius A* (Sgr A*), the Galactic center source associated with a supermassive black hole. These observations were conducted in 2017 using a global interferometric array of eight telescopes operating at a wavelength of λ = 1.3 mm. The EHT data resolve a compact emission region with intrahour variability. A variety of imaging and modeling analyses all support an image that is dominated by a bright, thick ring with a diameter of 51.8 ± 2.3 μas (68% credible interval). The ring has modest azimuthal brightness asymmetry and a comparatively … (Free, publicly-accessible full text available May 1, 2023.)
10. Abstract: We present Event Horizon Telescope (EHT) 1.3 mm measurements of the radio source located at the position of the supermassive black hole Sagittarius A* (Sgr A*), collected during the 2017 April 5–11 campaign. The observations were carried out with eight facilities at six locations across the globe. Novel calibration methods are employed to account for Sgr A*'s flux variability. The majority of the 1.3 mm emission arises from horizon scales, where intrinsic structural source variability is detected on timescales of minutes to hours. The effects of interstellar scattering on the image and its variability are found to be subdominant … (Free, publicly-accessible full text available May 1, 2023.)
http://caps.fool.com/blogs/dont-pay-too-much-attention/215674
# bigpeach's CAPS Blog

bigpeach (31.06) · Recs: 36 · June 19, 2009 · Comments (56)

Yesterday I was perusing the blogs and came across a comment I've heard too many times. An individual was citing another individual's CAPS score and accuracy, implying that the values in some way lent credibility to the opinions they were voicing. It's quite distressing to read things like this, not because I care what people do on CAPS, but because I worry that people may make decisions with their hard-earned money based only on recommendations from Top Fools. I'm going to explore in detail the accuracy component of the CAPS ranking, but first a note on score.

Browse through the profiles of the leader board and you'll see two tickers on almost every single one: GMGMQ.PK (yes, that's GM equity) and PXCE.OB. It is near certain that GM equity is worthless, and PXCE is a shell of a company that has most likely had its price manipulated by "pump and dumpers." So, to rack up points, you simply red-thumb these (in some cases more than once) and it's an easy 100 points plus an additional "correct call." This sort of thing is clearly not actionable in practice and so is irrelevant from an investing standpoint, yet in some people's minds the points it generates lend credibility to one's arguments. More important than a person's score is how they have achieved it. Have they made picks that are actionable in practice and supported each pick with thoughtful commentary, or have they gamed the system?

### Accuracy

Accuracy is critically important to CAPS scores, but is it really important? I'll often hear comments like "individual X has been right on Y% of their calls, therefore they should be listened to." Oh really? I will submit that it is not at all relevant, and in fact, a computer could beat anybody's accuracy. Let's explore.

It is well known (well enough that I'm not going to source this statement) that stock prices typically follow a random walk about some longer-term trend.
With a margin of "success" of only 5%, how well would you do if you simply made picks at random and closed them once the score exceeded 5? To find out, we simply need to run a random-walk simulator and compare the results to what human players have done.

### Methodology

- Lognormal daily stock prices projected for 252 days, or approximately one year (not attempting to model intra-day moves).
- The price is not in absolute terms, but in terms of price relative to the S&P 500, so it begins at 1, and a price of 1.05 gives a success on an outperform pick.
- The pick, either outperform or underperform, is chosen at random (this part isn't actually relevant).
- Volatility is fixed throughout the projection.
- I projected 10,000 picks and closed each as soon as it generated a score greater than 5. No pick is closed with a negative score.

Essentially, I'm trying to replicate with a simulator the kind of strategy most of the top CAPS players use. Clearly the volatility assumption is critical: the higher the volatility, the more likely a pick will at some point be successful. So what is our assumption? I looked at three widely followed companies and took the standard deviation of the difference between their daily returns and those of the S&P 500. Since prices in our model are relative, so too should volatility be. The period of time is the 1st quarter of 2009.

- BAC: 11.4%
- XOM: 1.3%
- AAPL: 1.9%

### Results

A 2% volatility during this period seemed like the most reasonable assumption to me for a real-life (not CAPS) portfolio. Clearly in CAPS you can take on more risk than you would with your brokerage account. The results may surprise some people. Fully 85% of the picks were recorded as a success. That means, if you simply picked stocks at random, at some point 85% of them would have beaten the S&P by 5%. Increase volatility to 5%, and the success ratio is 92%.
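The methodology above can be replicated in a few lines of Python (a simplified sketch, not the author's original simulator; it models only the "close at +5, never close a loser" strategy and ignores CAPS scoring details):

```python
import math
import random

def simulate_pick(daily_vol=0.02, days=252, threshold=0.05):
    """One random pick: lognormal daily price relative to the S&P 500,
    closed as soon as it is ahead of the index by `threshold`."""
    price = 1.0  # relative price; 1.05 means 5% ahead of the S&P
    for _ in range(days):
        price *= math.exp(random.gauss(0.0, daily_vol))  # lognormal step
        if price >= 1.0 + threshold:
            return True   # close the pick and record a "success"
    return False          # never ahead by 5%; left open, never closed at a loss

def success_ratio(n_picks=10_000, daily_vol=0.02):
    wins = sum(simulate_pick(daily_vol) for _ in range(n_picks))
    return wins / n_picks

# With ~2% daily relative volatility this lands in the vicinity of the
# 85% figure quoted above; at 5% volatility it climbs past 90%.
print(f"success ratio at 2% vol: {success_ratio():.0%}")
```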
If we use BAC's volatility, which in truth is the kind of pick a lot of CAPS players have been making over the past 9 months (think triple-levered ETFs like FAZ and FAS), the success ratio is a whopping 95%. So, in actuality, CAPS players, even the best, are underachieving the accuracy a computer could generate. Chalk it up to the picks they actually believe in and hold for a longer period of time.

I'll conclude by saying what I think everyone already knows, but perhaps at times forgets. When it comes to investing, it's not what you do, but why and how you do it. The Fool with a rating of 99 point something, a score over 10K, 85% accuracy, and an average score of 5 is doing nothing more than playing the CAPS game, and should not be taken any more seriously than the Fool with a score < 20. The strength of a person's research and argument is what's really important, and is the true value of this site.

#1) On June 19, 2009 at 1:57 PM, jddubya (< 20) wrote:

Good post - it is so childish when one member calls another's score a way to rate knowledge/investing/success. You can't judge a book by its cover. Or a movie by its previews. On a side note - "There Will Be Blood" was one of the worst movies I'd ever seen relative to the previews for it. Spiderman 3 takes a close second.

#2) On June 19, 2009 at 1:57 PM, portefeuille (99.63) wrote:

great post! you are right that anyone could find this easily, so I just add the link to the relevant wikipedia entry on random walk.

#3) On June 19, 2009 at 2:07 PM, anchak (99.87) wrote:

Bigpeach.....You get the rec! but not straight the nod..... I have seen Hans also mention and echo your argument..... However, the formulation is what drives your conclusion - let me suggest an alternative....... Here's my latest blog: specifically see a very nice paper posted by Tasty over there, comment #11. There they are fitting a GARCH-M null model to the indices.
Which means the main point there is that volatility by itself is serially correlated and is a stochastic, i.e. time-varying, entity. Also, given the fact that CAPS scores accuracy as an index delta, you are dealing with 2 separate volatilities, i.e. sigmas, which are time varying (one for the index and the other for the stock). The basic premise which you bring to the table is obvious - that CAPS accuracy should not be treated on a coin-toss, i.e. 50/50 random, basis - the threshold for being non-randomly successful in picks is possibly different - especially if picks are held over a long enough period. If you can use a correct enough premise, the results of the simulation would be constructive - and they would differ period-to-period.

#4) On June 19, 2009 at 2:08 PM, anchak (99.87) wrote:

Hans....how come you are not researching this a little more deeply.....I am surprised

#5) On June 19, 2009 at 2:20 PM, RonChapmanJr (31.96) wrote:

You may have been reading one of my comments. I have said a couple of times recently that I think GoodVibe's score is showing and will continue to support the idea that EWP is nonsense. However, with this post I agree and disagree. While a high ranking may be meaningless, it is not the same as a < 20 score. Players who have a really low score and accuracy and have been participating on CAPS for a long time (6+ months) either are purposely trying to do a poor job or just suck at investing. I don't care how well someone analyzes a company; if they never make money (or CAPS points) doing so, they should be ignored.

ron

#6) On June 19, 2009 at 2:21 PM, portefeuille (99.63) wrote:

"Hans....how come you are not researching this a little more deeply.....I am surprised"

Because I thought someone would do it for me. And bigpeach said he was working on it. I might join in.
I guess a slightly different approach that might also have a good "shot" at convincing the "critics" would be to take the picks made by, say, the current top 100 players and use the historic data (closing prices). The only problem would be the 200-pick limit. But I guess doing it without that limit would not be too much of a problem. So just use the calls that were actually made, with the exact starting day and price, but ignore the "ending of calls"; let the computer do the ending just like bigpeach did. I hope people get my point, I could elaborate. I think foolsmethrice is also "well equipped" to do that simulation (see comment #30 here).

#7) On June 19, 2009 at 2:26 PM, portefeuille (99.63) wrote:

"... take the picks made by say the current top 100 players ..."

Not throwing them together, of course. That would be our "empirical universe", so to say. A separate simulation would be done for each player.

#8) On June 19, 2009 at 2:29 PM, portefeuille (99.63) wrote:

I know some Monte Carlo simulation guys who could do that overnight, I guess. I am more of a pencil-and-paper theoretician ...

#9) On June 19, 2009 at 2:32 PM, portefeuille (99.63) wrote:

"I am more of a pencil and paper theoretician ..."

I guess Jim Simons is better off without me (see this post). I would screw up the first day.

#10) On June 19, 2009 at 2:35 PM, awallejr (72.32) wrote:

Here's a question I have asked several times. Perhaps someone here will try. If I picked a stock, say at $15, and this stock pays $3 per year in "dividends" (a lot of MLPs do, but while we call them dividends they are technically a return of capital), and 5 years and 3 months go by, I have received $15.75 in dividends. The stock is selling for, say, $10 and the S&P has gone up, say, 100%. How many points would I make off the pick?

#11) On June 19, 2009 at 2:39 PM, bigpeach (31.06) wrote:

Yeah, I thought you might appreciate the post porte.
You know, now that I finally did what I said I was going to do. Thanks for the comments anchak. For those who may not be familiar with asset pricing: the lognormal model is a standard model for prices, and is what is used in Black-Scholes option pricing. The GARCH-M is probably more commonly used in financial modeling today. It includes error terms that are a function of the prior period's error with a stochastic (random) component. It attempts to model prolonged periods of high or low volatility, as well as uncertainty. I believe it also includes price jumps (the gamma term) but I'm not positive about that. In any case, I will respond by saying I'm trying to prove a point, rather than provide accurate numbers. A more accurate model would not provide meaningfully different results. Accuracy of 80% plus can be gamed, and should not be a factor in assessing an individual's credibility.
#13) On June 19, 2009 at 2:40 PM, checklist34 (99.64) wrote:

peachie, you are one of CAPS' best bloggers, and this is another good post. I never did get to read your thesis on GNW because I think you went on vacation before posting it.

Average gains per pick is by far something the CAPS score should include, and I don't know how I feel about double-shorting something to gain more points. You could add to a short position all the way down and make more money than the value of your initial short... but on CAPS if you short a second time and lose, you lose; the losses aren't compounded like they would be in real life if you kept adding.

And to mirror real-life results, the CAPS game has to consider gains per pick much more than it does. Gaining 5 points average on 2000 picks is not likely to be very profitable in real life unless you have a large amount of money, as transaction costs will kill you.

And lastly, the making-many-picks-is-one-of-the-real-keys-to-a-good-score thing is silly. If we met a CAPS member whose portfolio had a green thumb for ASH from February 28th... and that's it. He'd have a score of 300 and probably a rank of 55 or something. But in real life he'd be 4x his money +

#14) On June 19, 2009 at 2:45 PM, bigpeach (31.06) wrote:

Oh crap, another person calling me out for something I said I'd write. Well, I better get on that one too. Although it's up about 3X from the time I said I'd write it, so I'll be a little late with it.

#15) On June 19, 2009 at 2:48 PM, portefeuille (99.63) wrote:

"The only problem would be the 200 pick limit."

Actually that should not be a problem, because most of the "top 100 players" do not end picks that have scored less than +5 score points, so the computer should have a smaller number of "active picks" for most of the time (hehe ...).

"In any case, I will respond by saying I'm trying to prove a point, rather than provide accurate numbers.
A more accurate model would not provide meaningfully different results. Accuracy of 80% plus can be gamed, and should not be a factor in assessing an individual's credibility."

I agree, and there should be no need for more simulations and analysis to prove that point. Thank you again!

#16) On June 19, 2009 at 2:52 PM, anchak (99.87) wrote:

awallejr: Man! Such tricks......TMF adjusts your basis automatically - didn't you know that?

Bigpeach: Now you get the Favorite also with your comment! This is a very difficult problem, my friend. You understand that, right? I think one easier approximation is to look at, say, +/- 30 days around inflection points and look at contrarian accuracy. Basically what I am saying is that once the main series has a trend - with increasing volatility - a pick in line with the trend has a much more conditional chance of success.

#17) On June 19, 2009 at 3:04 PM, Melaschasm (69.53) wrote:

The accuracy rating in CAPS is a joke. However, I think the raw score has some value, since it requires beating the S&P in total. For example, if I green-thumb and close BAC every time it gains five points, after 20 closes I will have a score of 100. If there were not a delay between my close and new green thumb, had I simply held my green thumb until I gained 100 points, I would have closed the pick at the same time as my 20th close. In both situations I have 'earned' 100 points, but in one situation I have wasted a bunch of time to boost my total accuracy.

*I realize my example ignores the ability to use the CAPS 20-minute delay to further game results, but I consider that a different issue.

#18) On June 19, 2009 at 3:11 PM, portefeuille (99.63) wrote:

"*I realize my example ignores the ability to use the CAPS 20 minute delay to further game results, but I consider that a different issue."

Oh, I had almost forgotten about that one (see comment #10 here). So we can summarise.
At least 85% accuracy and at least 3% average pick score from the 20-minute delay thing. Mr. Computer makes the top 100 easily within a few weeks using some 2 or 3 random-walk players.

#19) On June 19, 2009 at 3:18 PM, portefeuille (99.63) wrote:

"at least 3% average pick score from the 20 minute delay thing."

Make that some 6%. He should use high-volatility picks (as bigpeach has laid out), and with those a 4% gain (you have "free choice" here) at the start and another 2% (no "free choice" here) at the end of the pick should be no problem misusing the 20-minute-delay backdoor.

#20) On June 19, 2009 at 3:23 PM, portefeuille (99.63) wrote:

(also see comment #11 here for that 20 minute delay thing.)

#21) On June 19, 2009 at 3:31 PM, portefeuille (99.63) wrote:

((I would actually like to see who uses that 20 minute thing. If someone finds the time, it would be interesting to see who always "gets in/out" at the intraday low and "out/in" at the intraday high for his "outperform"/"underperform" calls ... sorry for the interruptions))

#22) On June 19, 2009 at 3:32 PM, anchak (99.87) wrote:

I regret to inform you that you have not understood how CAPS works..... Let's say you picked BAC at 5, and I think it almost touched 15. So a 300% gain....... So according to your computation that would provide 300/5 = 60 opportunities to pick and score 60x5 = 300 points in CAPS! There's a simple thing in finance called compounding. If you just closed and re-picked this stock at every 2.50 interval, here's what it would look like in raw CAPS points (without adjusting for the S&P):

(7.5 - 5)/5 = +50
(10 - 7.5)/7.5 = +33.33
(12.5 - 10)/10 = +25
(15 - 12.5)/12.5 = +20

Total points = 50 + 33.33 + 25 + 20 = 128.33

And you wonder why UltraLong is #2 or something! So yes, this would be +3 (as compared to 1) in accuracy - but almost 200 points less! Never close a green thumb in CAPS on which you have conviction!
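The interval arithmetic in comment #22 is easy to verify directly (a quick sketch; the raw points per leg ignore the S&P benchmark, as in the comment):

```python
# BAC rises from $5 to $15; the pick is closed and re-opened at every
# $2.50 step. Raw CAPS points per leg are the percentage gain over that leg.
prices = [5.0, 7.5, 10.0, 12.5, 15.0]
leg_points = [(b - a) / a * 100 for a, b in zip(prices, prices[1:])]

total_churned = sum(leg_points)                    # points from churning
held = (prices[-1] - prices[0]) / prices[0] * 100  # points from holding

print([round(p, 2) for p in leg_points])  # [50.0, 33.33, 25.0, 20.0]
print(round(total_churned, 2))            # 128.33
print(round(held, 2))                     # 200.0
```

Churning trades points for accuracy: four "correct calls" instead of one, but roughly 72 raw points left on the table relative to holding the whole move.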
Report this comment #23) On June 19, 2009 at 3:41 PM, truthisntstupid (94.22) wrote: No one with any sense would read comments from many blogs like this and ever again care the least about what any CAPS player had to say about anything - especially where real money is concerned. Report this comment #25) On June 19, 2009 at 4:16 PM, TMFBabo (100.00) wrote: anchak: in your example, that's a 200% gain for BAC from 5 to 15. Report this comment #26) On June 19, 2009 at 4:18 PM, checklist34 (99.64) wrote: did I read above that TMF adjusts our basis in the CAPs game for dividends? that's interesting, because in real life right now I have a negative cost basis on ACAS after the $1.07 divi, lol. So I'd have like 50 cents cost basis on the CAPs game. I don't think it's showing up that way but I haven't looked. Dividends, taxes, and transaction costs are 3 things that are a very big deal in real life but AFAI have ever understood, discounted on the CAPs game. Are dividends counted? Report this comment #27) On June 19, 2009 at 4:26 PM, anchak (99.87) wrote: bullishbabo: Oops! You are right of course...... So it is really 200 points as compared to 128 in the example. They absolutely account for divs AND Cap gains/Distbns too for ETFs etc. Report this comment #28) On June 19, 2009 at 4:48 PM, awallejr (72.32) wrote: awallejr: Man! Such tricks......TMF adjusts your basis automatically - didn't you know that? Anchak, yes I know that. So given my example how many points would I gain? I really am trying to make a point here. If my start price was $15, I accrued 15.75 in dividends, my start price would adjust to negative.  How can you possibly determine a pct gain then?
I think the game is broken, unless I am missing something here. Report this comment #29) On June 19, 2009 at 4:57 PM, awallejr (72.32) wrote: Or if CAPS doesn't ever drop your start price to negative but limits it to 0, again how can you compute the points off a basically infinite pct gain in comparison to the S&P? Report this comment #30) On June 19, 2009 at 5:03 PM, TMFCrocoStimpy (94.57) wrote: awallejr, Cost basis doesn't adjust by subtraction when a dividend is paid out.  The industry standard, which CAPS adheres to, is to treat a dividend as though it were re-invested into the stock on the ex-div date, at the price of the stock on that day.  The upshot is that on the ex-div date, you calculate a multiplier for your cost basis that is: 1/(1+div/share price) This reduces your effective cost basis, but can never go to zero (or negative). So, to answer your question, you would need to know the cost of your security at each date that the different dividends were paid out, and then multiply each of those correction factors against your original cost basis to come up with your current cost basis. -Stimpy Report this comment #31) On June 19, 2009 at 5:49 PM, portefeuille (99.63) wrote: 1/(1+div/share price) The formula should be a -> a * (1 - d/b), where a is the old cost basis, d is the dividend and b is the price before the dividend is paid, so the "factor" is (1 - d/b). Report this comment #32) On June 19, 2009 at 5:54 PM, portefeuille (99.63) wrote: With this x = (1 - d/b) you have (b - a)/a = (b - d - xa)/xa. Another way to arrive at this: Before the dividend you have n shares with a cost basis of a/share. You get nd/(b - d) new shares ("dividends reinvested"). So the new cost basis is na/(n + nd/(b - d)) = a * (1 - d/b). Report this comment #33) On June 19, 2009 at 6:05 PM, portefeuille (99.63) wrote: #30-32 For small dividends (small d/b) the "error" is small because for small c (= d/b) you have 1/(1 + c) ≈ 1 - c.
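The competing dividend-adjustment formulas in this exchange actually agree once it's pinned down which share price each one refers to, as the later comments work out. A small Python check, with b the pre-dividend price and d the dividend per share (the numbers are made up for illustration):

```python
# Cost-basis reduction factor for a reinvested dividend, written two ways.
def factor_pre(b, d):
    """Factor expressed against the pre-dividend price b (portefeuille's form)."""
    return 1 - d / b

def factor_post(b, d):
    """Same factor expressed against the post-dividend price p = b - d."""
    p = b - d
    return 1 / (1 + d / p)

b, d = 5.0, 2.0
print(round(factor_pre(b, d), 10), round(factor_post(b, d), 10))  # 0.6 0.6

# The small-dividend shortcut 1 - d/p diverges once d/p is no longer small:
print(round(1 - d / (b - d), 4))  # 0.3333, well off the exact 0.6
```

So the 1/(1 + div/post-div price) form and the (1 - d/b) form are the same exact factor; only the approximation 1 - div/post-div price loses accuracy for large payouts.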
Report this comment #34) On June 19, 2009 at 6:06 PM, topsecret09 (65.00) wrote: Point well made, and well taken...... I agree with you 100 % Report this comment #35) On June 19, 2009 at 6:49 PM, TMFCrocoStimpy (94.57) wrote: Porte, Industry often uses the approx. of (1-div/price) instead of the full expression, but that can be problematic.  As you've noted, this approx. is only valid for small values of (div/price).  Since one-time payouts (which can lead to larger values of div/price) are handled in the same way as regular dividends, the full expression for basis reduction is used to avoid errors from the higher order terms.  Though this does not happen often, handling the dividend properly from the get-go avoids having to make case-by-case adjustments for large one-time payouts. -Stimpy Report this comment #36) On June 19, 2009 at 6:52 PM, automaticaev (< 20) wrote: when i chose tickers i never delete them or sell them i just leave it so... Report this comment #37) On June 19, 2009 at 7:01 PM, portefeuille (99.63) wrote: Industry often uses the approx. of (1-div/price) instead of the full expression, but that can be problematic. Actually I do not think that 1/(1 + d/b) is "easier" to use than (1 - d/b). Why would anyone use that "approximation"? Is the "correct formula" used in the "caps" game? Report this comment #38) On June 19, 2009 at 7:14 PM, TMFCrocoStimpy (94.57) wrote: Porte, The "correct formula" is used in CAPS.  The "easier" part is in reference to the implementation and operations standpoint.  Because we use the exact formulation, we can run any size dividend through the same calculation.  If we did not do it that way, then you would have to run an inequality test against (div/share) for each dividend to determine if you should use an approximation or switch and use the full formulation.
Cleaner (i.e., easier) to code and maintain one routine rather than two as a general rule, plus the added inequality and branching logic require far more cycles to process than the extra DIV operation in the full formula. As to why the (1-div/share) approx. gets used in the industry at all, I don't have a definitive answer for that.  I like to think that it is legacy from the days before spreadsheets, and simply got ingrained in the way things are done, but that is just a guess. -Stimpy Report this comment #39) On June 19, 2009 at 7:26 PM, portefeuille (99.63) wrote: #38 That was confusing. The factor of (1 - d/b) is not an approximation (see comment #32 above). The wrong formula is the one you mentioned in comment #30 above. Which one is used in the "caps" game? And the calculation of the correct factor involves just 1 division and 1 subtraction, so it should be "easy to implement". Report this comment #40) On June 19, 2009 at 7:43 PM, TMFCrocoStimpy (94.57) wrote: Porte, Sorry, thought you had corrected yourself in #33.  The formulation in #32 is incorrect, and (1-div/price) is the approximation to the correct formula, not vice versa.  Before you immediately jump to say that #32 is formulated correctly, perform the simple sniff test on your answer:  If a dividend equal to the share price is paid out, then your formula would indicate that the cost basis is now 0, so the return on the original investment would be infinite.  Suggest you take a look at the problem again. Cheers, -Stimpy Report this comment #41) On June 19, 2009 at 8:11 PM, portefeuille (99.63) wrote: If a dividend equal to the share price is paid out, then your formula would indicate that the cost basis is now 0, so the return on the original investment would be infinite. The factor given in comment #32 above is the correct one. After a dividend of d = b the price would drop to zero and you could buy nd/(b - d) = \infty new shares, giving you a cost basis of zero, just the way it should be.
That is the correct limit. The "return on the original investment would" not be "infinite", because the price would have gone from 0 to 0. So the return is 0. Please have a look at comment #32 above. With this x = (1 - d/b) you have (b - a)/a = (b - d - xa)/xa and that is just the way it is supposed to be. The lhs is the "return" before the dividend payment. The rhs is the "return" after the dividend payment. Another way to arrive at this: Before the dividend you have n shares with a cost basis of a/share. You get nd/(b - d) new shares ("dividends reinvested"). So the new cost basis is na/(n + nd/(b - d)) = a * (1 - d/b). This should really not be that hard to see ... Report this comment #42) On June 19, 2009 at 8:13 PM, ARJTurgot (59.14) wrote: I'd take it a step further and suggest that you not take anyone's posts here seriously.  The only thing I see the posts on Fool proving is that an infinite number of monkeys on an infinite number of keyboards doesn't, in fact, result in the creation of Hamlet. Report this comment #43) On June 19, 2009 at 8:17 PM, portefeuille (99.63) wrote: To the readers. I apologise, this has nothing to do with this great post. It all started with comment #10 above. But by now I really have the feeling that they might not treat dividend payments the way they should here in the "caps" game. Report this comment #44) On June 19, 2009 at 8:27 PM, portefeuille (99.63) wrote: maybe a simple example: stock goes from 1 to 5 (+400%), pays dividend of 2. 1 - 2/5 = 0.6 new starting price is 1 * 0.6 = 0.6 0.6 to 3 is again (+400%). Maybe I have a wrong understanding of cost basis. I was talking about the adjusted starting price. Report this comment #45) On June 19, 2009 at 8:59 PM, portefeuille (99.63) wrote: I have found the error. You talk about the "post div price" b - d. 
1/ (1 + d/(b - d)) = 1 - d/b Report this comment #46) On June 19, 2009 at 9:00 PM, portefeuille (99.63) wrote: so no problem, you should have specified what share price is meant in 1/(1+div/share price) though. so "problem" solved, we are both right. have a nice weekend. Report this comment #47) On June 19, 2009 at 9:07 PM, portefeuille (99.63) wrote: To see that they use the post dividend share price have a look at this: -------------------------- Okay, exactly how are we adjusting? The exact formula is New Start Price = Old Start Price * Reduction Factor Where Reduction Factor = 1 / (1 + Dividend / Post-div Price) -------------------------- (from here) Report this comment #48) On June 19, 2009 at 9:16 PM, TMFCrocoStimpy (94.57) wrote: Porte, I see where this is getting fouled up - we're using different values of "ticker price" in the formulation.  For your formulation, (1-d/b), you are using b==price prior to ex-div adjustment.  Re-arrange so that "price" is the post ex-div adjustment value and you come around to the original formulation.  Terminology is key.  Post ex-div adjustment is what was meant by "...at the price of the stock on that day" in #30, since the price opens with the ex-div date adjustment. So why the heck would you do it that way you ask?  In practice, it has been considered easier to use the post ex-div price because it corresponds to the opening price on the ex-div date, and that is the data that comes in through the datafeeds and is also the price at which the new shares are purchased. However, as has been said, this is a bit of a deviation from the main post, which I'd like to comment on a bit.   As TMFJake has mentioned in a few places, we've been examining the accuracy variable considerably ever since the launch of CAPS.  The random walk aspect of the measure has grown considerably over the past year+ with the increased volatility of the market, given the fixed threshold that is currently used.
A variety of strategies including variance adjusted thresholds (rather than the fixed +5%) and time weighted/time decay functions of accuracy are being re-examined, though whether they will progress beyond being retested I can't say. BigPeach, I would suggest that you expand your simulations to include the 200 pick limitation constraint, and then generate a population of portfolios that evolve over a 1 or 2 year period.  Anytime you have a pick >+5 close it, and then restart a new pick in its place.  Track both the cumulative score and accuracy of the portfolios as they evolve over time.  If you have the tools, examine the distribution of accuracy v. score v. population at a variety of time slices.  I think you'll find that the pick number constraints have some fairly interesting effects on the mid to long term composition of the portfolio accuracy and score.  Another added feature of the simulation is that you want to work underperform picks into the scenario, which can have losses in excess of 100%, unless you want to ignore accuracy harvesting from underperforms (which seems to be one of the primary points people bring up when examining the whole accuracy question). The only purpose in suggesting this exercise is that you seem to be quite interested in the problem, and I'd like you to get a feel for how rich the landscape is from just the addition of a few more components to the simulation (pick number and underperforms) in terms of CAPS portfolios that would be generated at random.  I concur that with the current market volatility, there is more opportunity to use volatility to harvest accuracy than I would like the system to have.  However, and I stress this each time the accuracy discussion comes up, the number of players who fail miserably in trying to use this strategy is substantial, and few people tend to see that.
I would be quite interested in seeing what proportion of a random population you come up with that can maintain a high accuracy and high score using the pick limits and underperforms. -Stimpy Report this comment #49) On June 19, 2009 at 9:18 PM, TMFCrocoStimpy (94.57) wrote: Porte, Dollar short and a few minutes late on my last post.  Glad we both caught the discrepancy.  Have a good weekend yourself - hope to chat more. -Stimpy PS - BigPeach, apologies if this got too far off-topic Report this comment #50) On June 19, 2009 at 9:27 PM, TMFUltraLong (99.90) wrote: Nice.... so behold my 98.36% accuracy, take that Mr. Computer!!! lol UltraLong Report this comment #51) On June 19, 2009 at 9:43 PM, TimothyVR (< 20) wrote: *shrug* I don't care at all about the battle of the scores. I'm too much of a newbie to grasp the convoluted details. There's a certain amount of "in house" bickering and competition that goes with the territory. But I have found a lot more than that. The CAPS site is not just a source for ratings. Every stock has plenty of specific information and commentary that are helpful and clear. And yes, that DATA does help me make decisions about what to do with my hard-earned money. I prefer to find out as much as I can about a stock before I even think of buying and the CAPS discussions are very informative. Report this comment #52) On June 19, 2009 at 10:44 PM, Tastylunch (29.38) wrote: fantastic post BigPeach! well except for this part This sort of thing is clearly not actionable in practice and so is irrelevant from an investing stand point, A lot of .Pk and .ob stocks are shortable if you know where to look. It's all about having the right brokers ;) It is significantly harder to do now than it was 15-18 months ago unfortunately. Still the post was very awesome indeed! I think your criticisms are fair regarding CAPS.
I think it would be an issue if the objective of CAPS was to find the best investors, but since it's not, I guess it really doesn't matter other than to the people who dogpile Allstars with real money. The Fool with a rating of 99 point something, score over 10K, 85% accuracy and average score of 5 is doing nothing more than playing the CAPS game, and should not be taken any more seriously than the Fool with a score <20. The strength of a person's research and argument is what's really important, and is the true value of this site. That may be one of the wisest things I've read in CAPS and I completely agree. Report this comment #53) On June 20, 2009 at 12:21 AM, rexlove (99.59) wrote: Right on bigpeach. I'm tired of reading blogs from these CAPS players with high scores thinking they're the best thing to come along since Warren Buffett. Let's see some of these players make some REAL MONEY! Report this comment #54) On June 20, 2009 at 1:26 AM, Chromantix (97.52) wrote: I have a higher CAPS rating than you, so I'm smarter, better looking, and more accurate! (just kidding, +1) Report this comment #55) On June 20, 2009 at 7:40 AM, TMFBabo (100.00) wrote: However, with this post I agree and disagree.  While a high ranking may be meaningless it is not the same as a <20 score.  Players who have a really low score and accuracy and have been participating on CAPS for a long time (6+ months) either are purposely trying to do a poor job or just suck at investing.  I don't care how well someone analyzes a company, if they never make money (or CAPS points) doing so they should be ignored. I want to echo this comment from Ron.  I commented somewhere that it is quite possible for players with very good CAPS ratings to be bad investors in real life; that's if all 200 of their picks are for "gaming the system."  I must also add that it is pretty much impossible for players with bad CAPS ratings to be very good investors in real life.
While a player with a good rating can still possibly be a good investor in real life, a bad rating precludes that possibility. As long as players do not game the system, I believe that score is quite useful.  It should somewhat correlate to meaningful investing skill. A lot of .Pk and .ob stocks are shortable if you know where to look. It's all about having the right brokers ;) It is significantly harder to do now than it was 15-18 months ago unfortunately. I agree with this from Tasty.  I believe EverydayInvestor was able to find .OB and .PK stocks to short in real life, and he made some good money doing so.  Maybe Tasty or someone can confirm this? I just thought I remembered reading that on one of Everyday's old blogs. Report this comment #56) On September 30, 2009 at 11:10 AM, OklaBoston (62.62) wrote: As someone who had an All-Star rating around here back in November, and then watched said rating fall through the floor as a result of a lousy stretch starting in January, I'm grateful for your implication that <20 ratings such as my current one shouldn't be ignored. Let's hope my current emphasis on earnings surprises, combined with paying less attention to stockcharts (while not ignoring them totally), keeps helping get my rating back up where it used to be. It has been doing so for about a month now, for what that's worth. Report this comment
https://crypto.stackexchange.com/questions/37743/when-would-i-need-dsa-as-opposed-to-rsa-for-digital-signature
# When would I need DSA as opposed to RSA for digital signature?

The digital signature algorithm encrypts a hash using the sender's private key and the receiver's public key. This multiple encryption is a pretty expensive process since public key encryption is so resource intensive. On the other hand, the RSA-based digital signature algorithm encrypts the hash using only the sender's private key, which is a much cheaper operation. Under what circumstances would I need to use the expensive DSA as opposed to using the much faster RSA?

• (EC)DSA doesn't use the receiver's public key; in fact the receiver need not even have a key pair, all they need is the sender's public key to verify the signature. – puzzlepalace Jul 14 '16 at 19:19 • ECDSA (DSA over elliptic curves) actually is faster than RSA. – SEJPM Jul 14 '16 at 19:19 • Isn't (EC)DSA different in operation from the more traditional DSA? – Minaj Jul 14 '16 at 19:20 • No, the basic signature / verification algorithms for both are the same; the differences are in what the group operations are (e.g. modular exponentiation vs. point multiplication), since DSA is over a finite field and ECDSA is over an elliptic curve. – puzzlepalace Jul 14 '16 at 19:24 • DSA signatures are shorter: at a 128 bit security level you have 512 bit signatures, instead of about 3000 bits with RSA. – CodesInChaos Jul 14 '16 at 19:28

The digital signature algorithm encrypts a hash using the sender's private key and the receiver's public key.

Huh? I see two problems with the above statement:

• "Encryption"; using the word encryption implies that there's a way somehow to decrypt it. However, there's no way for anyone, even with the private key, to "decrypt" a signature to generate the hash. • "using ... the receiver's public key"; the signature operation most certainly does use the signer's private key, but it doesn't use any key (public or private) from the receiver.
On the other hand, the RSA-based digital signature algorithm encrypts the hash using only the sender's private key, which is a much cheaper operation.

Not necessarily; if we're talking about the standard DSA operation, the expensive operation is a modular exponentiation of the generator over a (perhaps) 256 bit random exponent modulo the prime; for RSA, the expensive operation (I'm assuming CRT) is two modular exponentiations of arbitrary values modulo primes half the length of the key size. I believe that you can implement the DSA operation (which, using precomputed tables, would take perhaps 50 multiplications modulo a 2048 bit prime) considerably faster than you can the RSA operation (which might take perhaps 2000+ multiplications modulo 1024 bit primes). In addition, with DSA, you can potentially perform this computation before you learn the value you're signing; if you can do this in time where you would otherwise be idle, you can make the signing operation very cheap indeed. And, if we're talking about DSA over Elliptic Curves, that is, ECDSA, the balance tilts even more radically towards ECDSA. Where RSA shines is the signature verification operation; there, it's considerably faster than (EC)DSA.

Under what circumstances would I need to use the expensive DSA as opposed to using the much faster RSA?

Quite apart from your mischaracterization of DSA as expensive and RSA as cheap: actually, straight DSA is fairly rarely used in practice. ECDSA (and relatives, such as EdDSA) are becoming more popular, in part because it is cheaper...

• "However, there's no way for anyone, even with the private key, to "decrypt" a signature to generate the hash...." Did you use this statement strictly in reference to DSA? – Minaj Jul 14 '16 at 19:57 • "Where RSA shines is the signature verification operation; there, it's considerably faster than (EC)DSA." Why exactly is signature verification fast? Small exponent?
– Minaj Jul 14 '16 at 19:58 • @Minaj: "However, there's no way for anyone, even with the private key, to "decrypt" a signature to generate the hash...."; because a DSA signature consists of $g^k$ and $k^{-1} (H + xg^k)$; without $k$, you can't derive $H$ from the latter, and deriving $k$ from the former is a Hard Problem. – poncho Jul 14 '16 at 20:07 • @poncho That's in DSA though. I believe what Minaj was asking was whether that statement is true for RSA. I was under the impression that it is not; signatures in RSA are "decrypted" to form the original hash as part of the signature verification process. – Ajedi32 Jul 14 '16 at 21:21 • @Ajedi32: with RSA, one could recover the hash (assuming that the padding operation allows it; I believe all the common ones will). On the other hand, I still don't like calling it "encryption", but for subtler reasons; the security policies we need during a signature operation really aren't the same as we need during a true encryption operation. – poncho Jul 14 '16 at 21:26
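To make poncho's quantities concrete - a DSA signature is the pair r (derived from $g^k$) and $s = k^{-1}(H + xr) \bmod q$ - here is a toy DSA in Python over a deliberately tiny subgroup. The parameters p = 607, q = 101, g = 64 are teaching values of my own choosing and hopelessly insecure; real DSA uses parameters on the order of 2048/256 bits:

```python
import hashlib
import secrets

# Toy DSA: q divides p - 1, and g generates the order-q subgroup of Z_p^*.
p, q, g = 607, 101, 64  # 607 - 1 = 6 * 101, and 64 = 2**6 mod 607 has order 101

def H(msg: bytes) -> int:
    """Message hash, reduced mod q as DSA prescribes."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # private key x in [1, q-1]
    return x, pow(g, x, p)                # public key y = g^x mod p

def sign(x, msg):
    while True:
        k = secrets.randbelow(q - 1) + 1  # fresh per-signature nonce
        r = pow(g, k, p) % q              # the "g^k" component
        s = pow(k, -1, q) * (H(msg) + x * r) % q  # k^{-1}(H + x r) mod q
        if r != 0 and s != 0:
            return r, s

def verify(y, msg, sig):
    r, s = sig
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    v = pow(g, H(msg) * w, p) * pow(y, r * w, p) % p % q
    return v == r

x, y = keygen()
sig = sign(x, b"hello")
print(verify(y, b"hello", sig))  # True
```

Note there is no way to "decrypt" (r, s) back to the hash: recovering k from r is a discrete logarithm, exactly the point made above. (The modular inverse via pow(k, -1, q) needs Python 3.8+.)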
http://math.stackexchange.com/questions/117515/vector-of-normal-distributes-random-variables
# Vector of normally distributed random variables

If I have $n$ processes $X^{(i)}=(X_t^{(i)})_{t\ge 0}$, each $X_t^{(i)}$ normally distributed and the processes independent, I can define new processes: $$Z_t:=X^{(1)}_t+\dots+X^{(n)}_t$$ Since the $X^{(i)}$ are independent, $Z_t$ is normally distributed too. My question is: Is $Z=(Z_t)_{t\ge 0}$ a Gaussian process, i.e. for $t_1<\dots<t_n$ is the random vector $(Z_{t_1},\dots,Z_{t_n})$ multivariate normally distributed? If so, why? hulik - Please use $X^{(i)}$ instead of $X^i$. You may need to compute the variance of the processes at some time –  Dilip Sarwate Mar 7 '12 at 13:30 Hint: linear transformations of Gaussian processes yield Gaussian processes. –  Dilip Sarwate Mar 7 '12 at 13:33 @DilipSarwate: I edited my post. Please could you give some further explanation? All I know is that $Z_t$ is normally distributed, but how should I use a linear transformation here? Thank you for your help –  user20869 Mar 8 '12 at 13:45 See for example here or here –  Dilip Sarwate Mar 8 '12 at 14:19 I write $m$ instead of $n$ for the number of time points, because $n$ is already used for the number of random processes. Since each $X^{(i)}$ is a Gaussian process, the random vector $$(X_{t_1}^{(i)},\dots, X_{t_m}^{(i)})$$ has a multivariate normal distribution for each $i=1,\dots,n$. Putting $n$ independent vectors of this kind together makes a normally distributed vector of length $mn$: $$(X_{t_1}^{(1)},\dots, X_{t_m}^{(1)}, X_{t_1}^{(2)},\dots, X_{t_m}^{(2)}, \dots, X_{t_1}^{(n)},\dots, X_{t_m}^{(n)})$$ The pushforward under a linear map that sums the entries corresponding to the same time is again a normal distribution. Thus, $(Z_{t_1},\dots, Z_{t_m})$ is normally distributed.
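The linear-map argument can be sanity-checked numerically: stack the per-process covariances into a block-diagonal matrix and push it through the summing map $A = [I \, I \, \cdots \, I]$. A numpy sketch, where the Brownian-motion kernel $\min(s,t)$ is an arbitrary example covariance of my choosing (it is not part of the question):

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0])        # m = 3 time points t_1 < t_2 < t_3
m, n = len(t), 4                     # n = 4 independent Gaussian processes
K = np.minimum.outer(t, t)           # example kernel: Cov(X_s, X_t) = min(s, t)

# Covariance of the stacked length-mn vector (independent blocks => block diagonal).
Sigma = np.kron(np.eye(n), K)

# Summing map A = [I I ... I]: adds up the entries belonging to the same time.
A = np.tile(np.eye(m), (1, n))

# Pushforward of a Gaussian under A is Gaussian with covariance A Sigma A^T,
# which for identically distributed blocks is just n * K.
print(np.allclose(A @ Sigma @ A.T, n * K))  # True
```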
http://ccruncher.net/dissemination.html
# Dissemination

Date: September 2013
Quantifying Portfolio Credit Risk - CCruncher Technical Document. CCruncher technical document (version 2.3 and above). technical document

Date: 22 February 2013, Barcelona
Multi-Factor Model Applied to SMEs. Presentation at Jornada CRM-Empresa sobre finanzas cuantitativas. presentation

Date: 20 June 2011, Madrid
Simulation of High-Dimensional t-Student Copulas with a Given Block Correlation Matrix. Technical paper presented at ASTIN Colloquium Madrid 2011. paper, presentation, C code

Erratum in corollary 2, page 5. Where it says

$$\det(A) = \prod_{i=1}^{k} \lambda_i \cdot (n_i-1) \cdot (d_i-m_{ii})$$

it should say

$$\det(A) = \prod_{i=1}^{k} \lambda_i \cdot (d_i-m_{ii})^{(n_i-1)}$$

Erratum in definition 4, page 13. Where it says $\ldots\frac{\zeta_i^2}{2}\ldots$ it should say $\ldots\frac{\zeta_i^2}{\nu}\ldots$

Estimation of h(ν) can be improved using Spearman's rank formula (theorem 2). Better yet, in algorithm 5, we can apply theorem 2, doing numerical integration, to transform the Pearson correlation into the Spearman correlation.

Date: July 2009
Simulating Large Portfolios of Credit: The CreditCruncher Project. Popular article published in ERCIM News number 78 with the special theme 'Mathematics for Finance and Economy'. ERCIM News 78 (page 35)

Date: 2005 - 2012
CCruncher - Technical Document. CCruncher technical document (from version 0.1 to 1.9). Currently obsolete. Overridden by the document 'Quantifying Portfolio Credit Risk - CCruncher Technical Document'.
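As an aside on the last erratum's Pearson-to-Spearman transformation: in the Gaussian limit (ν → ∞) the relationship has the classical closed form ρ_S = (6/π) arcsin(ρ/2); the finite-ν t-copula case is what requires the numerical integration mentioned above. A quick numpy simulation confirming the closed form (sample size, seed and ρ = 0.7 are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.7, 200_000

# Bivariate normal sample with Pearson correlation rho.
z = rng.standard_normal((n, 2))
x = z[:, 0]
y = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]

def ranks(v):
    """Ranks 0..n-1 (ties do not occur with continuous samples)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

rho_s_hat = np.corrcoef(ranks(x), ranks(y))[0, 1]  # Spearman = Pearson of ranks
rho_s_exact = 6 / np.pi * np.arcsin(rho / 2)       # closed form, Gaussian copula

print(round(rho_s_hat, 3), round(rho_s_exact, 3))
```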
https://darrenjw.wordpress.com/2014/06/08/tuning-particle-mcmc-algorithms/
# Tuning particle MCMC algorithms

Several papers have appeared recently discussing the issue of how to tune the number of particles used in the particle filter within a particle MCMC algorithm such as particle marginal Metropolis Hastings (PMMH). Three such papers are:

I have discussed pseudo-marginal MCMC and particle MCMC algorithms in previous posts. It will be useful to refer back to these posts if these topics are unfamiliar.

Within particle MCMC algorithms (and pseudo-marginal MCMC algorithms, more generally), an unbiased estimate of marginal likelihood is constructed using a number of particles. The more particles that are used, the better the estimate of marginal likelihood is, and the resulting MCMC algorithm will behave more like a “real” marginal MCMC algorithm. For a small number of particles, the algorithm will still have exactly the correct target, but the noise in the unbiased estimator of marginal likelihood will lead to poor mixing of the MCMC chain. The idea is to use just enough particles to ensure that there isn’t “too much” noise in the unbiased estimator, but not to waste lots of time producing a super-accurate estimate of marginal likelihood if that isn’t necessary to ensure good mixing of the MCMC chain.

The papers above try to give theoretical justifications for certain “rules of thumb” that are commonly used in practice. One widely adopted scheme is to tune the number of particles so that the variance of the log of the estimate of marginal likelihood is around one. The obvious questions are “where?” and “why?”, and these questions turn out to be connected. As we will see, there isn’t really a good answer to the “where?” question, but what people usually do is use a pilot run to get an estimate of the posterior mean, or mode, or MLE, and then pick one and tune the noise variance at that particular parameter value.
As to “why?”, well, the papers above make various (slightly different) assumptions, all of which lead to trading off mixing against computation time to obtain an “optimal” number of particles. They don’t all agree that the variance of the noise should be exactly 1, but they all agree to an order of magnitude. All of the above papers make the assumption that the noise distribution associated with the marginal likelihood estimate is independent of the parameter at which it is being evaluated, which explains why there isn’t a really good answer to the “where?” question – under the assumption it doesn’t matter what parameter value is used for tuning – they are all the same! Easy. Except that’s quite a big assumption, so it would be nice to know that it is reasonable, and unfortunately it isn’t. Let’s look at an example to see what goes wrong.

#### Example

In Chapter 10 of my book I look in detail at constructing a PMMH algorithm for inferring the parameters of a discretely observed stochastic Lotka-Volterra model. I’ve stepped through the computational details in a previous post which you should refer back to for the necessary background. Following that post, we can construct a particle filter to return an unbiased estimate of marginal likelihood using the following R code (which relies on the smfsb CRAN package):

```
require(smfsb)
# data
data(LVdata)
data=as.timedData(LVnoise10)
noiseSD=10
# measurement error model
dataLik <- function(x,t,y,log=TRUE,...)
{
    ll=sum(dnorm(y,x,noiseSD,log=TRUE))
    if (log) return(ll) else return(exp(ll))
}
# now define a sampler for the prior on the initial state
simx0 <- function(N,t0,...)
{
    mat=cbind(rpois(N,50),rpois(N,100))
    colnames(mat)=c("x1","x2")
    mat
}
# construct particle filter
mLLik=pfMLLik(150,simx0,0,stepLVc,dataLik,data)
```

Again, see the relevant previous post for details.
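Before probing the particle filter itself, the tuning idea can be illustrated with a self-contained toy (plain base R; nothing from smfsb is assumed). Any unbiased Monte Carlo estimator of a likelihood has a noisy log, and that noise variance shrinks as the number of particles/samples N grows — the rule of thumb amounts to increasing N until the variance is around one:

```r
# Toy unbiased "marginal likelihood" estimator: L = E[phi(X)], with
# X ~ N(0, 1.5^2) and phi the standard normal density, estimated by a
# Monte Carlo average of N draws. This plays the role that mLLik()
# plays for the Lotka-Volterra model above.
set.seed(42)
estL <- function(N) mean(dnorm(rnorm(N, 0, 1.5)))

# Variance of the log estimate, from repeated independent runs
noiseVar <- function(N) var(log(replicate(500, estL(N))))

# Noise variance decreases as N grows (roughly like 1/N for large N);
# the rule of thumb is to increase N until this is around one
sapply(c(1, 10, 100), noiseVar)
```

Note this toy evaluates the estimator at a single fixed “parameter value”; the point of the post is precisely that for real particle filters the answer depends on where you evaluate.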
So now mLLik() is a function that will return the log of an unbiased estimate of marginal likelihood (based on 150 particles) given a parameter value at which to evaluate. What we are currently wondering is whether the noise in the estimate is independent of the parameter at which it is evaluated. We can investigate this for this filter easily by looking at how the estimate varies as the first parameter (prey birth rate) varies. The following code computes a log likelihood estimate across a range of values and plots the result.

```
mLLik1=function(x){mLLik(th=c(th1=x,th2=0.005,th3=0.6))}
x=seq(0.7,1.3,length.out=5001)
y=sapply(x,mLLik1)
plot(x[y>-1e10],y[y>-1e10])
```

The resulting plot is as follows:

So, looking at the plot, it is very clear that the noise variance certainly isn’t constant as the parameter varies – it varies substantially. Furthermore, the way in which it varies is “dangerous”, in that the noise is smallest in the vicinity of the MLE. So, if a parameter close to the MLE is chosen for tuning the number of particles, this will ensure that the noise is small close to the MLE, but not elsewhere in parameter space. This could have bad consequences for the mixing of the MCMC algorithm as it explores the tails of the posterior distribution.

So with the above in mind, how should one tune the number of particles in a pMCMC algorithm? I can’t give a general answer, but I can explain what I do. We can’t rely on theory, so a pragmatic approach is required. The above rule of thumb usually gives a good starting point for exploration. Then I just directly optimise ESS per CPU second of the pMCMC algorithm from pilot runs for varying numbers of particles (and other tuning parameters in the algorithm). ESS is “effective sample size”, which can be estimated using the effectiveSize() function in the coda CRAN package.
Ugly and brutish, but it works…

### darrenjw

I am Professor of Stochastic Modelling within the School of Mathematics & Statistics at Newcastle University, UK. I am also a computational systems biologist.

## 11 thoughts on “Tuning particle MCMC algorithms”

1. I do something even more rough: for a fixed location, I plot the variance of the marginal likelihood (Y axis) against the number of particles and I check where the curve levels off. To me the above slice of the marginal likelihood suggests that using a ML or MAP estimator in this context might be preferable to a full MCMC analysis. Even more so if the computational budget is limited… any thoughts? Matteo

2. The problem is that the number of particles/samples is assumed constant across the parameter theta. If the variance of the log-likelihood estimate is easy to compute, then the optimal number of particles can be selected as a function of the parameter theta so that the variance of the log-likelihood estimate equals 1. See http://arxiv.org/abs/1309.3339 for a detailed discussion.

3. Arnaud Doucet & Mike Pitt says:

   Hi Darren,

   We do agree with you that the variance of the log-likelihood estimator is not constant across the parameter space. However, in practice the issue is whether the variance is approximately constant under the posterior for theta. As T, the number of observations, increases, the posterior will typically concentrate around a central value at the rate 1/sqrt(T). Therefore, under reasonable assumptions, the variance (which is a function of theta) will itself become more concentrated around a central value of choice. Results formalizing this intuition are given in the previous version of arXiv:1210.1871; see Lemma 4 in Section 6 and the associated figures. (This section has been suppressed in the third version for space reasons.)
   Your graph is not reflecting this point, as you are only displaying the log-likelihood estimate over a grid of values of theta, rather than over posterior samples from theta. You are considering a log-likelihood range of about 300, whereas to reflect posterior support one would normally consider a range of 3 to 4 (under a reasonable prior). By just looking at your graph, it is clear that this would reduce substantially the range of relevant theta and consequently the variability of the associated variance across this interval.

   You can find at http://www.stats.ox.ac.uk/~doucet/ExaminingAssumption.pdf histograms of the log-likelihood estimator error evaluated at the posterior mean. We also display histograms of the log-likelihood estimator error over the posterior for the parameter. We do this for various values of T (number of data samples) and number of particles N. For T small, the posterior over the parameters is fairly diffuse and neither our constant assumption for the variance nor the normality distributional assumption for the error hold. As T increases, the distribution of the log-likelihood estimate evaluated at the posterior mean and the distribution of the log-likelihood estimator error under the posterior are essentially indistinguishable and correspond very closely to the postulated normal distribution, as expected. We present these results for a simple AR(1) Gaussian process observed in Gaussian noise, but also for a complex continuous-time two-factor stochastic volatility model applied to real data. Hence we believe that for moderate to large data one can rely on theory and that the guidelines we provide are useful.

   Cordially,
   Arnaud Doucet & Mike Pitt

   1. Hi guys,

      Thanks for your considered (and polite!) response. Your intuition about the large T limit may well be correct, but I’m often interested in small T. There I’m not totally convinced by results based on 100 samples from the posterior.
      I’m sure we can all agree that there is less of a problem if the posterior has compact support. My concern is that the variance of the estimator increases (and is unbounded?) as the MCMC chain takes an excursion into the tails of the posterior. For the problem in my post, the corresponding posterior is shown in Figure 10.5 of my book, and the summary stats for it are shown on page 298. Near the posterior mode, at around 0.95, the variance of the log-likelihood estimator is roughly 1. However, at 0.84 (around the smallest value my MCMC sampler visited), the variance is around 17. This may be a slightly less dramatic difference than a naive look at my plot would suggest, but it’s non-trivial. It certainly makes a difference whether you tune the number of particles at 0.84 or 0.95.

      Cheers, Darren

      > x=rep(0.84,1000)
      > y=sapply(x,mLLik1)
      > var(y)
      [1] 16.96225
      > x=rep(0.95,1000)
      > y=sapply(x,mLLik1)
      > var(y)
      [1] 0.9849833

   1. Mike Pitt says:

      Hi Darren,

      You are right in that it does clearly make a large difference (16-fold) whether you choose N based upon the posterior mean or the tail value. However, if you choose based upon the posterior mean then it is clear (from your graph) that a random walk (RW) proposal will almost always accept moves from the tails towards the posterior. This can be seen because pi(theta’)/pi(theta), with theta’ being the proposed value near 0.95 and theta being out in the tails (say, 0.84), will be very large relative to exp(z’-z), where z is the log-likelihood error. So the performance (stickiness) of the scheme will be governed by what is happening near the mean of theta. If you were to implement a RW particle scheme for your model and plot the IACT against sigma (the SD evaluated at the posterior mean, 0.95) by varying N, then I would imagine this would be minimised around the values suggested in the papers.

      Best wishes, Mike Pitt

   2. Yes, I agree – the PMMH sampler for this filter is fine.
      I generally find that tuning the variance of the noise at the posterior mean is usually OK, as in fact the overall performance (ESS/sec) is relatively insensitive to the precise value of N over a reasonable vicinity of the optimum. But knowing that the noise can vary substantially, I always like to check…

      Cheers,

4. Mike Pitt says:

   For some models it is possible, as Minh Ngoc indicates, to adaptively alter N with the proposed value of theta to try to keep sigma constant. If each section of the likelihood, for individuals t=1,..,T, can be estimated via importance sampling then a closed-form estimator of the variance can be easily implemented simply from the weights used in the estimator itself. For particle filters of course the situation is more complicated, and any solution may be more expensive than just keeping N fixed…

   Cheers, Mike

5. Dear Darren,

   I have not yet really done anything with particle filters, but clearly, I am interested in having a try. Right now, my (modest) objective is to get a qualitative idea of how the approach scales with the size of the problem (T). So it is not exactly the subject of your post, but still it is a related issue.

   Is it correct to say that, under relatively general conditions, the complexity of a particle MH will typically scale as O(T^2)? That is, the variance of the estimator of the ratio of likelihoods, or equivalently the variance of the estimator of the log likelihood, will be in T/N, where N is the number of particles, and this is more or less what you want to control in order to keep the acceptance rate afloat. Thus, the number of particles should increase linearly with T, so that the resulting mcmc is in O(NT) = O(T^2). This seems to be suggested by some of the articles you mention, but I am not yet sufficiently comfortable with the whole thing to be sure that I understand correctly.

   1.
      Yes, the folklore (supported by theory) is that the number of particles in your filter should scale linearly with the number of time-points, at least when being used inside pMCMC algorithms.
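The arithmetic behind that O(T^2) scaling can be made concrete in a line of R (an editorial illustration, not from the original discussion): if N must grow linearly with T, the per-iteration cost N*T is quadratic in T.

```r
# Cost of one particle filter sweep when N = c * T particles are used
# over T time points: proportional to N * T = c * T^2, so doubling the
# number of time points quadruples the cost.
cost <- function(T, c = 1) (c * T) * T
cost(c(100, 200, 400))   # quadruples with each doubling of T
```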
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9082134366035461, "perplexity": 457.2958997422354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989616.38/warc/CC-MAIN-20210513234920-20210514024920-00343.warc.gz"}
http://bioconductor.org/help/course-materials/2016/BKU/S4_RNASeq.html
Contents

• Bioconductor known-gene RNA-seq differential expression work flow, from aligned reads to differential expression of genes. Important statistical issues and their resolution. Placing results of differential expression analysis into biological context. Brief discussion of novel-gene and transcript-level RNA-seq differential expression analysis. Primary packages: DESeq2, edgeR.

1 Presentation: RNA-seq work flow

1.1 Experimental design

Keep it simple

• Classical experimental designs
• Time series
• Without missing values, where possible
• Intended analysis must be feasible – can the available samples and hypothesis of interest be combined to formulate a testable statistical hypothesis?

Replicate

• Extent of replication determines nuance of biological question.
• No replication (1 sample per treatment): qualitative description with limited statistical options.
• 3-5 replicates per treatment: designed experimental manipulation with cell lines or other well-defined entities; 2-fold (?) change in average expression between groups.
• 10-50 replicates per treatment: population studies, e.g., cancer cell lines.
• 1000’s of replicates: prospective studies, e.g., SNP discovery
• One resource: RNASeqPower

Avoid confounding experimental factors with other factors

• Common problems: samples from one treatment all on the same flow cell; samples from treatment 1 processed first, treatment 2 processed second, etc.

Record co-variates

Be aware of batch effects

• Known
  • Phenotypic covariates, e.g., age, gender
  • Experimental covariates, e.g., lab or date of processing
  • Incorporate into linear model, at least approximately
• Unknown
  • Or just unexpected / undetected
  • Characterize using, e.g., sva.
• Surrogate variable analysis
  • Leek et al., 2010, Nature Reviews Genetics 11 733-739, Leek & Story PLoS Genet 3(9): e161.
  • Scientific finding: pervasive batch effects
  • Statistical insights: surrogate variable analysis — identify and build surrogate variables; remove known batch effects
  • Benefits: reduce dependence, stabilize error rate estimates, and improve reproducibility
  • combat software / sva Bioconductor package

(Figure: HapMap samples from one facility, ordered by date of processing.)

1.2 Wet-lab

Confounding factors

• Record or avoid
• Sequence contaminants
• Enrichment bias, e.g., non-uniform transcript representation.
• PCR artifacts – adapter contaminants, sequence-specific amplification bias, …

1.3 Sequencing

Axes of variation

• Single- versus paired-end
• Length: 50-200nt
• Number of reads per sample

Application-specific, e.g.,

• ChIP-seq: short, single-end reads are usually sufficient
• RNA-seq, known genes: single- or paired-end reads
• RNA-seq, transcripts or novel variants: paired-end reads
• Copy number: single- or paired-end reads
• Variants: depth via longer, paired-end reads
• Microbiome: long paired-end reads (overlapping ends)

1.4 Alignment

Alignment strategies

• de novo
  • No reference genome; considerable sequencing and computational resources
• Genome
  • Established reference genome
  • Splice-aware aligners
  • Novel transcript discovery
• Transcriptome
  • Established reference genome; reliable gene model
  • Simple aligners
  • Known gene / transcript expression

Splice-aware aligners (and Bioconductor wrappers)

1.5 Reduction to ‘count tables’

• Use known gene model to count aligned reads overlapping regions of interest / gene models
• Gene model can be public (e.g., UCSC, NCBI, ENSEMBL) or ad hoc (gff file)
• GenomicAlignments::summarizeOverlaps()
• Rsubread::featureCounts()
• HTSeq, htseq-count

1.5.2 (kallisto / sailfish)

• ‘Next generation’ differential expression tools; transcriptome alignment
• E.g., kallisto takes a radically different approach: from FASTQ to count table without BAM files.
• Very fast, almost as accurate.
• Hints on how it works; arXiv
• Integration with gene-level analyses – Soneson et al.

1.6 Analysis

Unique statistical aspects

• Large data, few samples
• Comparison of each gene, across samples; univariate measures
• Each gene is analyzed by the same experimental design, under the same null hypothesis

Summarization

• Counts per se, rather than a summary (RPKM, FPKM, …), are relevant for analysis
• For a given gene, larger counts imply more information; RPKM etc. treat all estimates as equally informative.
• Comparison is across samples at each region of interest; all samples have the same region of interest, so modulo library size there is no need to correct for, e.g., gene length or mappability.

Normalization

• Libraries differ in size (total counted reads per sample) for uninteresting reasons; we need to account for differences in library size in statistical analysis.
• Total number of counted reads per sample is not a good estimate of library size. It is unnecessarily influenced by regions with large counts, and can introduce bias and correlation across genes. Instead, use a robust measure of library size that takes account of skew in the distribution of counts (simplest: trimmed geometric mean; more advanced / appropriate measures encountered in the lab).
• Library size (total number of counted reads) differs between samples, and should be included as a statistical offset in analysis of differential expression, rather than ‘dividing by’ the library size early in an analysis.

Appropriate error model

• Count data is not distributed normally or as a Poisson process, but rather as negative binomial.
• This results from combining Poisson (‘shot’ noise, i.e., within-sample technical and sampling variation in read counts) with variation between biological samples.
• A negative binomial model requires estimation of an additional parameter (‘dispersion’), which is estimated poorly in small samples.
• The basic strategy is to moderate per-gene estimates with more robust local estimates derived from genes with similar expression values (a little more on borrowing information is provided below).

Pre-filtering

• Naively, a statistical test (e.g., t-test) could be applied to each row of a counts table. However, we have relatively few samples (10’s) and very many comparisons (10,000’s), so a naive approach is likely to be very underpowered, resulting in a very high false discovery rate.
• A simple approach is to perform fewer tests by removing regions that could not possibly result in statistical significance, regardless of the hypothesis under consideration.
• Example: a region with 0 counts in all samples could not possibly be significant regardless of hypothesis, so exclude it from further analysis.
• Basic approaches: ‘K over A’-style filter – require a minimum of A (normalized) read counts in at least K samples. Variance filter, e.g., IQR (inter-quartile range) provides a robust estimate of variability; can be used to rank and discard least-varying regions.
• More nuanced approaches: edgeR vignette; work flow today.

Borrowing information

• Why does low statistical power elevate false discovery rate?
• One way of developing intuition is to recognize a t-test (for example) as a ratio of variances. The numerator is treatment-specific, but the denominator is a measure of overall variability.
• Variances are measured with uncertainty; over- or under-estimating the denominator variance has an asymmetric effect on a t-statistic or similar ratio, with an underestimate inflating the statistic more dramatically than an overestimate deflates it. Hence the elevated false discovery rate.
• Under the null hypothesis used in microarray or RNA-seq experiments, the expected overall variability of a gene is the same, at least for genes with similar average expression.
• The strategy is to estimate the denominator variance as the between-group variance for the gene, moderated by the average between-group variance across all genes.
• This strategy exploits the fact that the same experimental design has been applied to all genes assayed, and is effective at moderating false discovery rate.

1.7 Statistical Issues In-depth: Normalization and Dispersion

1.7.1 Normalization

DESeq2 estimateSizeFactors(), Anders and Huber, 2010

• For each gene: geometric mean of all samples.
• For each sample: median ratio of the sample’s gene counts over the geometric mean of all samples.
• Functions other than the median can be used; control genes can be used instead.

1.7.2 Dispersion

DESeq2 estimateDispersions()

• Estimate per-gene dispersion.
• Fit a smoothed relationship between dispersion and abundance.

1.8 Comprehension

Placing differentially expressed regions in context

• Gene names associated with genomic ranges
• Gene set enrichment and similar analysis
• Proximity to regulatory marks
• Integrate with other analyses, e.g., methylation, copy number, variants, …

Correlation between genomic copy number and mRNA expression identified 38 mis-labeled samples in the TCGA ovarian cancer Affymetrix microarray dataset.

2 Lab: Gene-level RNA-seq differential expression

2.1 Background

This lab is derived from: RNA-Seq workflow: gene-level exploratory analysis and differential expression, by Michael Love, Simon Anders, Wolfgang Huber; modified by Martin Morgan, October 2015.

This lab will walk you through an end-to-end RNA-Seq differential expression workflow, using DESeq2 along with other Bioconductor packages. The complete work flow starts from the FASTQ files, but we will start after reads have been aligned to a reference genome and reads overlapping known genes have been counted.
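As a preview of what the lab’s call to DESeq() will do internally, the median-of-ratios size-factor calculation described in section 1.7.1 can be sketched in a few lines of base R. This is a simplified illustration on a toy matrix, not DESeq2’s actual implementation (which handles zero counts and offers alternatives — see ?estimateSizeFactors):

```r
# Toy count matrix: 3 genes x 2 samples; sample 2 sequenced twice as deeply
counts <- matrix(c(10, 20, 30,
                   20, 40, 60), ncol = 2)

# Step 1: per-gene geometric mean across samples (a pseudo-reference sample)
geoMeans <- exp(rowMeans(log(counts)))

# Step 2: per-sample size factor = median ratio of counts to the reference
sizeFactors <- apply(counts, 2, function(s) median(s / geoMeans))
sizeFactors   # about 0.71 and 1.41: sample 2 is "twice as deep" as sample 1
```

Dividing each column by its size factor puts samples on a common scale; DESeq2 instead carries the factors through its generalized linear model as offsets, in line with the advice above not to ‘divide by’ library size early in an analysis.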
We will perform exploratory data analysis (EDA), differential gene expression analysis with DESeq2, and visually explore the results. A number of other Bioconductor packages are important in statistical inference of differential expression at the gene level, including Rsubread, edgeR, limma, BaySeq, and others.

2.2 Experimental data

The data used in this workflow is an RNA-Seq experiment of airway smooth muscle cells treated with dexamethasone, a synthetic glucocorticoid steroid with anti-inflammatory effects. Glucocorticoids are used, for example, in asthma patients to prevent or reduce inflammation of the airways. In the experiment, four primary human airway smooth muscle cell lines were treated with 1 micromolar dexamethasone for 18 hours. For each of the four cell lines, we have a treated and an untreated sample.

The reference for the experiment is:

Himes BE, Jiang X, Wagner P, Hu R, Wang Q, Klanderman B, Whitaker RM, Duan Q, Lasky-Su J, Nikolos C, Jester W, Johnson M, Panettieri R Jr, Tantisira KG, Weiss ST, Lu Q. “RNA-Seq Transcriptome Profiling Identifies CRISPLD2 as a Glucocorticoid Responsive Gene that Modulates Cytokine Function in Airway Smooth Muscle Cells.” PLoS One. 2014 Jun 13;9(6):e99625. PMID: 24926665. GEO: GSE52778.

2.3 Preparing count matrices

As input, the DESeq2 package expects count data as obtained, e.g., from RNA-Seq or another high-throughput sequencing experiment, in the form of a matrix of integer values. The value in the i-th row and the j-th column of the matrix tells how many reads have been mapped to gene i in sample j. Analogously, for other types of assays, the rows of the matrix might correspond, e.g., to binding regions (with ChIP-Seq) or peptide sequences (with quantitative mass spectrometry).

The count values must be raw counts of sequencing reads. This is important for DESeq2’s statistical model to hold, as only the actual counts allow assessing the measurement precision correctly.
Hence, please do not supply other quantities, such as (rounded) normalized counts, or counts of covered base pairs – this will only lead to nonsensical results. We will discuss how to summarize data from BAM files to a count table later in the course. Here we’ll ‘jump right in’ and start with a prepared SummarizedExperiment.

2.4 Starting from SummarizedExperiment

We now use R’s data() command to load a prepared SummarizedExperiment that was generated from the publicly available sequencing data files associated with the Himes et al. paper, described above. The steps we used to produce this object were equivalent to those you worked through in the previous sections, except that we used all the reads and all the genes. For more details on the exact steps used to create this object, type vignette("airway") into your R session.

library(airway)
data("airway")
se <- airway

The information in a SummarizedExperiment object can be accessed with accessor functions. For example, to see the actual data, i.e., here, the read counts, we use the assay() function. (The head() function restricts the output to the first few lines.)

head(assay(se))
##                 SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516 SRR1039517 SRR1039520
## ENSG00000000003        679        448        873        408       1138       1047        770
## ENSG00000000005          0          0          0          0          0          0          0
## ENSG00000000419        467        515        621        365        587        799        417
## ENSG00000000457        260        211        263        164        245        331        233
## ENSG00000000460         60         55         40         35         78         63         76
## ENSG00000000938          0          0          2          0          1          0          0
##                 SRR1039521
## ENSG00000000003        572
## ENSG00000000005          0
## ENSG00000000419        508
## ENSG00000000457        229
## ENSG00000000460         60
## ENSG00000000938          0

In this count matrix, each row represents an Ensembl gene, each column a sequenced RNA library, and the values give the raw numbers of sequencing reads that were mapped to the respective gene in each library. We also have metadata on each of the samples (the columns of the count matrix).
If you’ve counted reads with some other software, you need to check at this step that the columns of the count matrix correspond to the rows of the column metadata. We can quickly check the millions of fragments which uniquely aligned to the genes.

colSums(assay(se))
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516 SRR1039517 SRR1039520 SRR1039521
##   20637971   18809481   25348649   15163415   24448408   30818215   19126151   21164133

Supposing we have constructed a SummarizedExperiment using one of the methods described in the previous section, we now need to make sure that the object contains all the necessary information about the samples, i.e., a table with metadata on the count matrix’s columns stored in the colData slot:

colData(se)
## DataFrame with 8 rows and 9 columns
##            SampleName     cell      dex    albut        Run avgLength Experiment    Sample
##              <factor> <factor> <factor> <factor>   <factor> <integer>   <factor>  <factor>
## SRR1039508 GSM1275862   N61311    untrt    untrt SRR1039508       126  SRX384345 SRS508568
## SRR1039509 GSM1275863   N61311      trt    untrt SRR1039509       126  SRX384346 SRS508567
## SRR1039512 GSM1275866  N052611    untrt    untrt SRR1039512       126  SRX384349 SRS508571
## SRR1039513 GSM1275867  N052611      trt    untrt SRR1039513        87  SRX384350 SRS508572
## SRR1039516 GSM1275870  N080611    untrt    untrt SRR1039516       120  SRX384353 SRS508575
## SRR1039517 GSM1275871  N080611      trt    untrt SRR1039517       126  SRX384354 SRS508576
## SRR1039520 GSM1275874  N061011    untrt    untrt SRR1039520       101  SRX384357 SRS508579
## SRR1039521 GSM1275875  N061011      trt    untrt SRR1039521        98  SRX384358 SRS508580
##               BioSample
##                <factor>
## SRR1039508 SAMN02422669
## SRR1039509 SAMN02422675
## SRR1039512 SAMN02422678
## SRR1039513 SAMN02422670
## SRR1039516 SAMN02422682
## SRR1039517 SAMN02422673
## SRR1039520 SAMN02422683
## SRR1039521 SAMN02422677

Here we see that this object already contains an informative colData slot – because we have already prepared it for you, as described in the airway vignette.
However, when you work with your own data, you will have to add the pertinent sample / phenotypic information for the experiment at this stage. We highly recommend keeping this information in a comma-separated value (CSV) or tab-separated value (TSV) file, which can be exported from an Excel spreadsheet, and then assign this to the colData slot, making sure that the rows correspond to the columns of the SummarizedExperiment. We made sure of this correspondence by specifying the BAM files using a column of the sample table.

Check out the rowRanges() of the summarized experiment; these are the genomic ranges over which counting occurred.

rowRanges(se)
## GRangesList object of length 64102:
## $ENSG00000000003
## GRanges object with 17 ranges and 2 metadata columns:
##        seqnames               ranges strand |   exon_id       exon_name
##           <Rle>            <IRanges>  <Rle> | <integer>     <character>
##    [1]        X [99883667, 99884983]      - |    667145 ENSE00001459322
##    [2]        X [99885756, 99885863]      - |    667146 ENSE00000868868
##    [3]        X [99887482, 99887565]      - |    667147 ENSE00000401072
##    [4]        X [99887538, 99887565]      - |    667148 ENSE00001849132
##    [5]        X [99888402, 99888536]      - |    667149 ENSE00003554016
##    ...      ...                  ...    ... .       ...             ...
##   [13]        X [99890555, 99890743]      - |    667156 ENSE00003512331
##   [14]        X [99891188, 99891686]      - |    667158 ENSE00001886883
##   [15]        X [99891605, 99891803]      - |    667159 ENSE00001855382
##   [16]        X [99891790, 99892101]      - |    667160 ENSE00001863395
##   [17]        X [99894942, 99894988]      - |    667161 ENSE00001828996
##
## ...
## <64101 more elements>
## -------
## seqinfo: 722 sequences (1 circular) from an unspecified genome

Let’s look at basic properties of the data, especially in relation to the statistical factors known to be important in RNA-seq differential expression analysis. The library size is the total number of reads mapped per sample. Use colSums() on the assay() data to summarize library size.
colSums(assay(airway))
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516 SRR1039517 SRR1039520 SRR1039521
##   20637971   18809481   25348649   15163415   24448408   30818215   19126151   21164133

1. How does library size vary between samples?
2. Why will it be important to incorporate library size in assessing differential expression?
3. While easy to understand, why is simple scaling by total library size unsatisfactory from a statistical perspective?
4. What different approaches might be taken to estimate library size?
5. What statistical approaches might be taken to incorporate library size?
6. (Answer after completing the DESeq2 workflow) What approach does DESeq2 take to (a) estimate library size; and (b) incorporate library size into differential expression analysis?

Use rowMeans() on the assay() data and either hist() or plot(density()) to display the distribution of average gene expression across all genes. It may be helpful to transform the data, and to exclude genes with very few counts in all samples.

means <- rowMeans(assay(airway))
xlim <- range(log(1 + means))
plot(density(log(1 + means)), xlim=xlim)
plot(density(log(1 + means[means > 1])), xlim=xlim)

1. It’s clear that a gene without any expression in any sample cannot possibly be differentially expressed, independent of the hypothesis under investigation. How many genes are excluded by this criterion? What is the advantage of excluding these genes a priori, before any hypothesis is evaluated?
2. (Answer after completing the DESeq2 workflow) By extension, it seems intuitive that there is a threshold level of expression below which differential gene expression cannot be detected, independent of the hypothesis under investigation. How does DESeq2 address this?

An MDS plot attempts to represent the distance between N-dimensional points projected to 2 or 3 dimensions.
d <- dist(t(log(1 + assay(airway))))
mds <- cmdscale(d)
plot(mds, pch=20, asp=1, cex=2)
plot(mds, pch=20, asp=1, cex=2, col=airway$cell)
plot(mds, pch=20, asp=1, cex=2, col=airway$dex)

1. Calculate the (Euclidean) distance between samples and use multi-dimensional scaling to represent these distances in two dimensions.
2. In an exploratory fashion, color points based on cell line (airway$cell) and experimental treatment (airway$dex).
3. Interpret these plots, and suggest how these might inform subsequent statistical analysis.

If counts were Poisson distributed, there would be a linear relationship between the mean and variance of counts. Can you demonstrate in a straight-forward way that this is not the case?

rowVars <- matrixStats::rowVars
plot(rowVars(1 + assay(airway)) ~ rowMeans(1 + assay(airway)), log="xy")
## Warning in xy.coords(x, y, xlabel, ylabel, log): 30633 y values <= 0 omitted from logarithmic plot

2.5 From SummarizedExperiment to DESeqDataSet

We will use the DESeq2 package for assessing differential expression. The package uses an extended version of the SummarizedExperiment class, called DESeqDataSet. It’s easy to go from a SummarizedExperiment to a DESeqDataSet:

library("DESeq2")
dds <- DESeqDataSet(se, design = ~ cell + dex)

The ‘design’ argument is a formula which expresses how the counts for each gene depend on the variables in colData. Remember you can always get information on method arguments with ?, e.g., ?DESeqDataSet.

2.6 Differential expression analysis

It will be convenient to make sure that untrt is the first level in the dex factor, so that the default log2 fold changes are calculated as treated over untreated (by default R will choose the first alphabetical level; remember: computers don’t know what to do unless you tell them).
The function relevel() achieves this:

dds$dex <- relevel(dds$dex, "untrt")

In addition, if you have at any point subset the columns of the DESeqDataSet, you should similarly call droplevels() on the factors if the subsetting has resulted in some levels having 0 samples.

2.6.1 Running the pipeline

Finally, we are ready to run the differential expression pipeline. With the data object prepared, the DESeq2 analysis can now be run with a single call to the function DESeq():

dds <- DESeq(dds)

## estimating size factors
## estimating dispersions
## gene-wise dispersion estimates
## mean-dispersion relationship
## final dispersion estimates
## fitting model and testing

This function prints a message for each of the steps it performs. These are described in more detail in the manual page ?DESeq. Briefly, they are: the estimation of size factors (which control for differences in the library size of the sequencing experiments), the estimation of dispersion for each gene, and fitting a generalized linear model. A DESeqDataSet is returned which contains all the fitted information within it, and the following section describes how to extract result tables of interest from this object.

2.6.2 Building the results table

Calling results() without any arguments will extract the estimated log2 fold changes and p values for the last variable in the design formula. If there are more than 2 levels for this variable, results() will extract the results table for a comparison of the last level over the first level.
(res <- results(dds))

## log2 fold change (MAP): dex trt vs untrt
## Wald test p-value: dex trt vs untrt
## DataFrame with 64102 rows and 6 columns
##                  baseMean log2FoldChange      lfcSE       stat       pvalue        padj
##                 <numeric>      <numeric>  <numeric>  <numeric>    <numeric>   <numeric>
## ENSG00000000003 708.60217    -0.37415246 0.09884435 -3.7852692 0.0001535422 0.001289269
## ENSG00000000005   0.00000             NA         NA         NA           NA          NA
## ENSG00000000419 520.29790     0.20206175 0.10974241  1.8412367 0.0655868755 0.197066699
## ENSG00000000457 237.16304     0.03616686 0.13834540  0.2614244 0.7937652378 0.913855995
## ENSG00000000460  57.93263    -0.08445399 0.24990710 -0.3379415 0.7354072485 0.884141561
## ...                   ...            ...        ...        ...          ...         ...
## LRG_94                  0             NA         NA         NA           NA          NA
## LRG_96                  0             NA         NA         NA           NA          NA
## LRG_97                  0             NA         NA         NA           NA          NA
## LRG_98                  0             NA         NA         NA           NA          NA
## LRG_99                  0             NA         NA         NA           NA          NA

As res is a DataFrame object, it carries metadata with information on the meaning of the columns:

mcols(res, use.names=TRUE)

## DataFrame with 6 rows and 2 columns
##                        type                                description
##                 <character>                                <character>
## baseMean       intermediate  mean of normalized counts for all samples
## log2FoldChange      results  log2 fold change (MAP): dex trt vs untrt
## lfcSE               results          standard error: dex trt vs untrt
## stat                results          Wald statistic: dex trt vs untrt
## pvalue              results        Wald test p-value: dex trt vs untrt
## padj                results                       BH adjusted p-values

The first column, baseMean, is just the average of the normalized count values (counts divided by size factors), taken over all samples. The remaining four columns refer to a specific contrast, namely the comparison of the trt level over the untrt level for the factor variable dex. See the help page for results() (by typing ?results) for information on how to obtain other contrasts.

The column log2FoldChange is the effect size estimate. It tells us how much the gene's expression seems to have changed due to treatment with dexamethasone in comparison to untreated samples.
This value is reported on a logarithmic scale to base 2: for example, a log2 fold change of 1.5 means that the gene's expression is increased by a multiplicative factor of $$2^{1.5} \approx 2.82$$. Of course, this estimate has an uncertainty associated with it, which is available in the column lfcSE, the standard error estimate for the log2 fold change estimate.

We can also express the uncertainty of a particular effect size estimate as the result of a statistical test. The purpose of a test for differential expression is to test whether the data provides sufficient evidence to conclude that this value is really different from zero. DESeq2 performs for each gene a hypothesis test to see whether evidence is sufficient to decide against the null hypothesis that there is no effect of the treatment on the gene and that the observed difference between treatment and control was merely caused by experimental variability (i.e., the type of variability that you can just as well expect between different samples in the same treatment group). As usual in statistics, the result of this test is reported as a p value, and it is found in the column pvalue. (Remember that a p value indicates the probability that a fold change as strong as the observed one, or even stronger, would be seen under the situation described by the null hypothesis.)

We can also summarize the results with the following line of code, which reports some additional information.

summary(res)

## out of 33469 with nonzero total read count
## adjusted p-value < 0.1
## LFC > 0 (up)     : 2617, 7.8%
## LFC < 0 (down)   : 2203, 6.6%
## outliers [1]     : 0, 0%
## low counts [2]   : 15441, 46%
## (mean count < 5)
## [1] see 'cooksCutoff' argument of ?results
## [2] see 'independentFiltering' argument of ?results

Note that there are many genes with differential expression due to dexamethasone treatment at the FDR level of 10%.
This makes sense, as the smooth muscle cells of the airway are known to react to glucocorticoid steroids. However, there are two ways to be more strict about which set of genes are considered significant:

• lower the false discovery rate threshold (the threshold on padj in the results table)
• raise the log2 fold change threshold from 0 using the lfcThreshold argument of results(). See the DESeq2 vignette for a demonstration of the use of this argument.

Sometimes a subset of the p values in res will be NA ("not available"). This is DESeq()'s way of reporting that all counts for this gene were zero, and hence no test was applied. In addition, p values can be assigned NA if the gene was excluded from analysis because it contained an extreme count outlier. For more information, see the outlier detection section of the vignette.

2.6.3 Multiple testing

Novices in high-throughput biology often assume that thresholding these p values at a low value, say 0.05, as is often done in other settings, would be appropriate – but it is not. We briefly explain why:

There are 5648 genes with a p value below 0.05 among the 33469 genes for which the test succeeded in reporting a p value:

sum(res$pvalue < 0.05, na.rm=TRUE)

## [1] 5648

sum(!is.na(res$pvalue))

## [1] 33469

Now, assume for a moment that the null hypothesis is true for all genes, i.e., no gene is affected by the treatment with dexamethasone. Then, by the definition of p value, we expect up to 5% of the genes to have a p value below 0.05. This amounts to 1673 genes. If we just considered the list of genes with a p value below 0.05 as differentially expressed, this list should therefore be expected to contain up to 1673 / 5648 = 30% false positives.
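As a sanity check, the back-of-envelope numbers above can be reproduced with base R arithmetic (a minimal sketch, not part of the workflow itself; the counts are taken from the output shown above):

```r
# Expected false positives if all null hypotheses were true, testing at p < 0.05
n_tested <- 33469                   # genes with a reported p value
n_below  <- 5648                    # genes observed with p < 0.05
expected <- 0.05 * n_tested
round(expected)                     # about 1673 genes expected by chance alone
round(expected / n_below, 2)        # about 0.30, i.e. up to ~30% false positives
```

This is exactly the calculation that motivates controlling the false discovery rate rather than thresholding raw p values.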
DESeq2 uses the Benjamini-Hochberg (BH) adjustment as described in the base R p.adjust function; in brief, this method calculates for each gene an adjusted p value which answers the following question: if one called significant all genes with a p value less than or equal to this gene's p value threshold, what would be the fraction of false positives (the false discovery rate, FDR) among them (in the sense of the calculation outlined above)? These values, called the BH-adjusted p values, are given in the column padj of the res object.

Hence, if we consider a fraction of 10% false positives acceptable, we can consider all genes with an adjusted p value below $$10\% = 0.1$$ as significant. How many such genes are there?

sum(res$padj < 0.1, na.rm=TRUE)

## [1] 4820

We subset the results table to these genes and then sort it by the log2 fold change estimate to get the significant genes with the strongest down-regulation.

resSig <- subset(res, padj < 0.1)
head(resSig[ order( resSig$log2FoldChange ), ])

## log2 fold change (MAP): dex trt vs untrt
## Wald test p-value: dex trt vs untrt
## DataFrame with 6 rows and 6 columns
##                  baseMean log2FoldChange     lfcSE       stat       pvalue         padj
##                 <numeric>      <numeric> <numeric>  <numeric>    <numeric>    <numeric>
## ENSG00000162692 508.17023      -3.449475 0.1767138 -19.520120 7.406383e-85 9.537305e-82
## ENSG00000105989 333.21469      -2.847364 0.1763098 -16.149771 1.139834e-58 5.871121e-56
## ENSG00000146006  46.80760      -2.828093 0.3377003  -8.374564 5.542861e-17 2.708041e-15
## ENSG00000214814 243.27698      -2.753559 0.2235537 -12.317215 7.317772e-35 1.232942e-32
## ENSG00000267339  26.23357      -2.704476 0.3519722  -7.683776 1.544666e-14 5.924944e-13
## ENSG00000013293 244.49733      -2.641050 0.1992872 -13.252481 4.365309e-40 8.842448e-38

…and with the strongest upregulation. The order() function gives the indices in increasing order, so a simple way to ask for decreasing order is to add a - sign. Alternatively, you can use the argument decreasing=TRUE.
head(resSig[ order( -resSig$log2FoldChange ), ])

## log2 fold change (MAP): dex trt vs untrt
## Wald test p-value: dex trt vs untrt
## DataFrame with 6 rows and 6 columns
##                  baseMean log2FoldChange     lfcSE     stat        pvalue          padj
##                 <numeric>      <numeric> <numeric> <numeric>     <numeric>     <numeric>
## ENSG00000109906 385.07103       4.847164 0.3313657  14.62784  1.866158e-48  5.902298e-46
## ENSG00000179593  67.24305       4.830815 0.3314192  14.57615  3.983670e-48  1.196960e-45
## ENSG00000152583 997.43977       4.313961 0.1721375  25.06114 1.320082e-138 2.379844e-134
## ENSG00000163884 561.10717       4.074297 0.2104708  19.35802  1.744626e-83  1.965757e-80
## ENSG00000250978  56.31819       4.054731 0.3294746  12.30666  8.340703e-35  1.392280e-32
## ENSG00000168309 159.52692       3.977191 0.2558532  15.54481  1.725211e-54  7.775524e-52

2.7 Diagnostic plots

A quick way to visualize the counts for a particular gene is to use the plotCounts() function, which takes as arguments the DESeqDataSet, a gene name, and the group over which to plot the counts.

topGene <- rownames(res)[which.min(res$padj)]
data <- plotCounts(dds, gene=topGene, intgroup=c("dex"), returnData=TRUE)

We can also make more customizable plots using the ggplot() function from the ggplot2 package:

library(ggplot2)
ggplot(data, aes(x=dex, y=count, fill=dex)) +
  scale_y_log10() +
  geom_dotplot(binaxis="y", stackdir="center")

## stat_bindot() using bins = 30. Pick better value with binwidth.

An "MA-plot" provides a useful overview for an experiment with a two-group comparison. The log2 fold change for a particular comparison is plotted on the y-axis and the average of the counts normalized by size factor is shown on the x-axis ("M" for minus, because a log ratio is equal to log minus log, and "A" for average).

plotMA(res, ylim=c(-5,5))

Each gene is represented with a dot. Genes with an adjusted $$p$$ value below a threshold (here 0.1, the default) are shown in red.
The DESeq2 package incorporates a prior on log2 fold changes, resulting in moderated log2 fold changes for genes with low counts and highly variable counts, as can be seen by the narrowing of the spread of points on the left side of the plot. This plot demonstrates that only genes with a large average normalized count contain sufficient information to yield a significant call.

We can label individual points on the MA plot as well. Here we use the with() R function to plot a circle and text for a selected row of the results object. Within the with() function, only the baseMean and log2FoldChange values for the selected rows of res are used.

plotMA(res, ylim=c(-5,5))
with(res[topGene, ], {
  points(baseMean, log2FoldChange, col="dodgerblue", cex=2, lwd=2)
  text(baseMean, log2FoldChange, topGene, pos=2, col="dodgerblue")
})

Whether a gene is called significant depends not only on its LFC but also on its within-group variability, which DESeq2 quantifies as the dispersion. For strongly expressed genes, the dispersion can be understood as a squared coefficient of variation: a dispersion value of 0.01 means that the gene's expression tends to differ by typically $$\sqrt{0.01} = 10\%$$ between samples of the same treatment group. For weak genes, the Poisson noise is an additional source of noise.

The function plotDispEsts() visualizes DESeq2's dispersion estimates:

plotDispEsts(dds)

The black points are the dispersion estimates for each gene as obtained by considering the information from each gene separately. Unless one has many samples, these values fluctuate strongly around their true values. Therefore, we fit the red trend line, which shows the dispersions' dependence on the mean, and then shrink each gene's estimate towards the red line to obtain the final estimates (blue points) that are then used in the hypothesis test. The blue circles above the main "cloud" of points are genes which have high gene-wise dispersion estimates and are labelled as dispersion outliers.
These estimates are therefore not shrunk toward the fitted trend line.

Another useful diagnostic plot is the histogram of the p values.

hist(res$pvalue, breaks=20, col="grey50", border="white")

This plot becomes a bit smoother by excluding genes with very small counts:

hist(res$pvalue[res$baseMean > 1], breaks=20, col="grey50", border="white")

2.8 Independent filtering

The MA plot highlights an important property of RNA-Seq data. For weakly expressed genes, we have no chance of seeing differential expression, because the low read counts suffer from such high Poisson noise that any biological effect is drowned in the uncertainties from the read counting. We can also show this by examining the ratio of small p values (say, less than 0.01) for genes binned by mean normalized count:

# create bins using the quantile function
qs <- c(0, quantile(res$baseMean[res$baseMean > 0], 0:7/7))
# cut the genes into the bins
bins <- cut(res$baseMean, qs)
# rename the levels of the bins using the middle point
levels(bins) <- paste0("~", round(.5*qs[-1] + .5*qs[-length(qs)]))
# calculate the ratio of p values less than .01 for each bin
ratios <- tapply(res$pvalue, bins, function(p) mean(p < .01, na.rm=TRUE))
# plot these ratios
barplot(ratios, xlab="mean normalized count", ylab="ratio of small p values")

At first sight, there may seem to be little benefit in filtering out these genes. After all, the test found them to be non-significant anyway. However, these genes have an influence on the multiple testing adjustment, whose performance improves if such genes are removed. By removing the weakly-expressed genes from the input to the FDR procedure, we can find more genes to be significant among those which we keep, and so improve the power of our test. This approach is known as independent filtering.

The term independent highlights an important caveat. Such filtering is permissible only if the filter criterion is independent of the actual test statistic.
Otherwise, the filtering would invalidate the test and consequently the assumptions of the BH procedure. This is why we filtered on the average over all samples: this filter is blind to the assignment of samples to the treatment and control group and hence independent. The independent filtering software used inside DESeq2 comes from the genefilter package, which contains a reference to a paper describing the statistical foundation for independent filtering.

Our result table only uses Ensembl gene IDs, but gene names may be more informative. Bioconductor's annotation packages help with mapping various ID schemes to each other. We load the AnnotationDbi package and the annotation package org.Hs.eg.db:

library(org.Hs.eg.db)

This is the organism annotation package ("org") for Homo sapiens ("Hs"), organized as an AnnotationDbi database package ("db"), using Entrez Gene IDs ("eg") as primary key. To get a list of all available key types, use:

columns(org.Hs.eg.db)

##  [1] "ACCNUM"       "ALIAS"        "ENSEMBL"      "ENSEMBLPROT"  "ENSEMBLTRANS" "ENTREZID"
##  [7] "ENZYME"       "EVIDENCE"     "EVIDENCEALL"  "GENENAME"     "GO"           "GOALL"
## [13] "IPI"          "MAP"          "OMIM"         "ONTOLOGY"     "ONTOLOGYALL"  "PATH"
## [19] "PFAM"         "PMID"         "PROSITE"      "REFSEQ"       "SYMBOL"       "UCSCKG"
## [25] "UNIGENE"      "UNIPROT"

res$hgnc_symbol <- unname(mapIds(org.Hs.eg.db, rownames(res), "SYMBOL", "ENSEMBL"))

## 'select()' returned 1:many mapping between keys and columns

res$entrezgene <- unname(mapIds(org.Hs.eg.db, rownames(res), "ENTREZID", "ENSEMBL"))

## 'select()' returned 1:many mapping between keys and columns

Now the results have the desired external gene IDs:

resOrdered <- res[order(res$pvalue),]
head(resOrdered)

## log2 fold change (MAP): dex trt vs untrt
## Wald test p-value: dex trt vs untrt
## DataFrame with 6 rows and 8 columns
##                   baseMean log2FoldChange     lfcSE      stat        pvalue          padj
##                  <numeric>      <numeric> <numeric> <numeric>     <numeric>     <numeric>
## ENSG00000152583   997.4398       4.313961 0.1721375  25.06114 1.320082e-138 2.379844e-134
## ENSG00000165995   495.0929       3.186823 0.1281565  24.86665 1.708459e-136 1.540005e-132
## ENSG00000101347 12703.3871       3.618735 0.1489428  24.29613 2.153705e-130 1.294233e-126
## ENSG00000120129  3409.0294       2.871488 0.1182491  24.28337 2.937761e-130 1.324049e-126
## ENSG00000189221  2341.7673       3.230395 0.1366746  23.63567 1.657244e-123 5.975359e-120
## ENSG00000211445 12285.6151       3.553360 0.1579821  22.49216 4.952520e-112 1.488067e-108
##                 hgnc_symbol  entrezgene
##                 <character> <character>
## ENSG00000152583     SPARCL1        8404
## ENSG00000165995      CACNB2         783
## ENSG00000101347      SAMHD1       25939
## ENSG00000120129       DUSP1        1843
## ENSG00000189221        MAOA        4128
## ENSG00000211445        GPX3        2878

2.10 Exporting results

You can easily save the results table in a CSV file, which you can then load with a spreadsheet program such as Excel. The call to as.data.frame is necessary to convert the DataFrame object (IRanges package) to a data.frame object which can be processed by write.csv.

write.csv(as.data.frame(resOrdered), file="results.csv")

2.11 Session information

As the last part of this document, we call the function sessionInfo(), which reports the version numbers of R and all the packages used in this session. It is good practice to always keep such a record, as it will help to track down what has happened in case an R script ceases to work because functions have been changed in a newer version of a package. The session information should also always be included in any emails to the Bioconductor support site, along with all code used in the analysis.
sessionInfo()

## R version 3.3.1 Patched (2016-10-12 r71512)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.1 LTS
##
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8
##  [4] LC_COLLATE=C               LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
## [10] LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats4    parallel  stats     graphics  grDevices utils     datasets  methods   base
##
## other attached packages:
##  [1] org.Hs.eg.db_3.4.0                 AnnotationDbi_1.36.0
##  [3] RColorBrewer_1.1-2                 ggplot2_2.2.0
##  [5] gplots_3.0.1                       DESeq2_1.14.0
##  [7] VariantAnnotation_1.20.1           RNAseqData.HNRNPC.bam.chr14_0.12.0
## [11] Rsamtools_1.26.1                   BiocParallel_1.8.1
## [13] rtracklayer_1.34.1                 airway_0.108.0
## [15] SummarizedExperiment_1.4.0         Biobase_2.34.0
## [17] GenomicRanges_1.26.1               GenomeInfoDb_1.10.1
## [19] Biostrings_2.42.0                  XVector_0.14.0
## [21] IRanges_2.8.1                      S4Vectors_0.12.0
## [23] BiocGenerics_0.20.0                BiocStyle_2.2.0
##
## loaded via a namespace (and not attached):
##  [1] splines_3.3.1          gtools_3.5.0           Formula_1.2-1          assertthat_0.1
##  [5] latticeExtra_0.6-28    BSgenome_1.42.0        yaml_2.1.14            RSQLite_1.0.0
##  [9] lattice_0.20-34        chron_2.3-47           digest_0.6.10          colorspace_1.3-0
## [13] htmltools_0.3.5        Matrix_1.2-7.1         plyr_1.8.4             XML_3.98-1.5
## [17] biomaRt_2.30.0         genefilter_1.56.0      zlibbioc_1.20.0        xtable_1.8-2
## [21] scales_0.4.1           gdata_2.17.0           htmlTable_1.7          tibble_1.2
## [25] annotate_1.52.0        GenomicFeatures_1.26.0 nnet_7.3-12            lazyeval_0.2.0
## [29] survival_2.40-1        magrittr_1.5           evaluate_0.10          hwriter_1.3.2
## [33] foreign_0.8-67         tools_3.3.1            data.table_1.9.6       matrixStats_0.51.0
## [37] stringr_1.1.0          munsell_0.4.3          locfit_1.5-9.1         cluster_2.0.5
## [41] caTools_1.17.1         grid_3.3.1             RCurl_1.95-4.8         bitops_1.0-6
## [45] rmarkdown_1.1          gtable_0.2.0           codetools_0.2-15       DBI_0.5-1
## [49] gridExtra_2.2.1        knitr_1.15             Hmisc_4.0-0            KernSmooth_2.23-15
## [53] stringi_1.1.2          Rcpp_0.12.7            geneplotter_1.52.0     rpart_4.1-10
## [57] acepack_1.4.1
https://support.bioconductor.org/p/85644/#85653
diffHiC: differential Hi-C analysis starting from GInteractions objects

Question (Vivek.b):

Hi, I would like to use diffHiC to get differential domains using two different cell lines. I have converted my normalized matrix for each of them into GInteractions objects.

$s2
GInteractions object with 6398123 interactions and 1 metadata column:
          seqnames1              ranges1     seqnames2              ranges2 |     norm.freq
              <Rle>            <IRanges>         <Rle>            <IRanges> |     <numeric>
      [1]     chr2L       [72559, 74473] ---     chr2L       [ 72559, 74473] |  1725.8969929
      [2]     chr2L       [72559, 74473] ---     chr2L       [ 76649, 80158] | 4548.27044772
      [3]     chr2L       [72559, 74473] ---     chr2L       [ 80158, 84182] | 4166.33078199
      [4]     chr2L       [72559, 74473] ---     chr2L       [133166, 137699] | 956.928116437
      [5]     chr2L       [72559, 74473] ---     chr2L       [205336, 207194] | 245.480797633
      ...       ...                  ... ...       ...                  ... .           ...
[6398119]      chrX [22215621, 22220243] ---      chrX [22255907, 22256827] | 966.649143049
[6398120]      chrX [22215621, 22220243] ---      chrX [22401281, 22407854] | 499.251088874
[6398121]      chrX [22255907, 22256827] ---      chrX [22255907, 22256827] | 419.723838666
[6398122]      chrX [22255907, 22256827] ---      chrX [22401281, 22407854] | 653.823761634
[6398123]      chrX [22401281, 22407854] ---      chrX [22401281, 22407854] | 1729.85785275
-------
regions: 8456 ranges and 0 metadata columns
seqinfo: 5 sequences from an unspecified genome; no seqlengths

$c8
GInteractions object with 45016517 interactions and 1 metadata column:
    seqnames1       ranges1 seqnames2        ranges2 | norm.freq
        <Rle>     <IRanges>     <Rle>      <IRanges> | <numeric>
[1]     chr2L [9879, 11901] ---  chr2L [ 9879, 11901] |        20
[2]     chr2L [9879, 11901] ---  chr2L [11901, 13158] |       359
[3]     chr2L [9879, 11901] ---  chr2L [13158, 14087] |       196
[4]     chr2L [9879, 11901] ---  chr2L [14087, 14759] |       202
[5]     chr2L [9879, 11901] ---  chr2L [14759, 15546] |        20
...       ...           ... ...    ...            ... .       ...
[45016513] chrYHet [184819, 190734] --- chrYHet [184819, 190734] |  2
[45016514] chrYHet [184819, 190734] --- chrYHet [198239, 215835] |  3
[45016515] chrYHet [198239, 215835] --- chrYHet [198239, 215835] | 59
[45016516] chrYHet [198239, 215835] --- chrYHet [333103, 338457] |  1
[45016517] chrYHet [333103, 338457] --- chrYHet [333103, 338457] |  4
-------
regions: 47740 ranges and 0 metadata columns
seqinfo: 14 sequences from an unspecified genome; no seqlengths

As you can see, the sizes of the two objects differ and the bin sizes are also not the same. Therefore I am getting an error when trying to create an InteractionSet object out of them. I want to continue from step 5 of the diffHiC manual. Is it possible to proceed from here?

Thanks,
Vivek

Answer (Aaron Lun):

No. There are at least three reasons that I can think of:

• You have different bin pairs in the two objects. How are you going to match them up? What happens to a bin pair in one object that partially overlaps with multiple bin pairs in the other object - which of the overlapping bin pairs is the correct match? What do you do if you have a bin pair that is present in one object and absent in another - should you set the intensity for the latter to zero? (This won't be correct if the entries of your objects represent called interactions, such that missingness does not imply zero.)
• Your normalized interaction frequencies are not counts. edgeR needs counts.
• You don't seem to have any replicates. edgeR needs replicates.

Also, with regard to the first point, your objects have very different numbers of regions and sequences. This should not occur if you generated them directly from contact matrices of the same genome.

Reply (Vivek.b):

Hi Aaron, thanks for the reply. Using restriction fragment size produced different bin sizes (since sometimes I get a cut, sometimes not; plus they are different cell lines).
But I have now overcome this by using a fixed bin size to create the matrix (not restriction fragment length). I can overcome #3 since I have two replicates for each cell line. For #2, I am using ICE-normalized counts at this moment, although I also have RAW counts for this data. I could also floor the normalized counts to integers, but I don't know what you recommend. I thought the counts for balanced matrices would be better.

My new GInteractions objects (only pasting half, due to character limit):

$s2_2
GInteractions object with 293659 interactions and 1 metadata column:
    seqnames1       ranges1 seqnames2        ranges2 |     norm.freq
        <Rle>     <IRanges>     <Rle>      <IRanges> |     <numeric>
[1]     chr2L [5000, 10000] ---  chr2L [ 5000, 10000] | 23814.2750234
[2]     chr2L [5000, 10000] ---  chr2L [10000, 15000] | 8456.43596258
[3]     chr2L [5000, 10000] ---  chr2L [25000, 30000] | 1276.57303962
[4]     chr2L [5000, 10000] ---  chr2L [60000, 65000] | 858.910217701
[5]     chr2L [5000, 10000] ---  chr2L [65000, 70000] | 991.555775811
...       ...           ... ...    ...            ... .           ...
[293655] chrX [22410000, 22415000] --- chrX [22415000, 22420000] | 6260.66759762
[293656] chrX [22410000, 22415000] --- chrX [22420000, 22422827] | 3311.00613075
[293657] chrX [22415000, 22420000] --- chrX [22415000, 22420000] | 65142.0495447
[293658] chrX [22415000, 22420000] --- chrX [22420000, 22422827] | 2274.53704182
[293659] chrX [22420000, 22422827] --- chrX [22420000, 22422827] | 49047.4898882
-------
regions: 2250 ranges and 0 metadata columns
seqinfo: 5 sequences from an unspecified genome; no seqlengths

$c8_1
GInteractions object with 17320314 interactions and 1 metadata column:
    seqnames1       ranges1 seqnames2        ranges2 |     norm.freq
        <Rle>     <IRanges>     <Rle>      <IRanges> |     <numeric>
[1]     chr2L [5000, 10000] ---  chr2L [ 5000, 10000] | 84.5360595733
[2]     chr2L [5000, 10000] ---  chr2L [15000, 20000] | 177.442529967
[3]     chr2L [5000, 10000] ---  chr2L [20000, 25000] | 62.7259423407
[4]     chr2L [5000, 10000] ---  chr2L [25000, 30000] | 54.1675658477
[5]     chr2L [5000, 10000] ---  chr2L [30000, 35000] | 35.1780907057
...       ...           ... ...    ...            ... .           ...
[17320310] chrX [22405000, 22410000] --- chrX [22420000, 22422827] | 42.9441380746
[17320311] chrX [22410000, 22415000] --- chrX [22410000, 22415000] | 228.990371705
[17320312] chrX [22410000, 22415000] --- chrX [22420000, 22422827] | 89.3560095486
[17320313] chrX [22415000, 22420000] --- chrX [22415000, 22420000] | 140.273595774
[17320314] chrX [22420000, 22422827] --- chrX [22420000, 22422827] | 280.963274035
-------
regions: 23502 ranges and 0 metadata columns
seqinfo: 5 sequences from an unspecified genome; no seqlengths

Answer (Aaron Lun):

Okay, let's say we have two GInteractions objects.
To "merge" them, I would first create a common reference:

combined <- unique(c(gi1, gi2))

I would then standardize the regions in all the individual objects to this reference:

replaceRegions(gi1) <- regions(combined)
replaceRegions(gi2) <- regions(combined)

I would match the entries of the individual objects to the reference object:

m1 <- match(gi1, combined)
m2 <- match(gi2, combined)

Then use this to generate my count matrix (assuming unobserved interactions have counts of zero):

counts <- matrix(0, ncol=2, nrow=length(combined))
counts[m1,1] <- gi1$counts
counts[m2,2] <- gi2$counts

From this point, it is simple to create an InteractionSet object:

iset <- InteractionSet(counts, combined)

Reply (Vivek.b): Thanks Aaron. This works.

Comment (Aaron Lun): Why do you have different fragment sizes between samples? If they came from the same genome with the same restriction enzyme, then the coordinates of each bin should be the same between samples.

P.S. Use the raw counts. Giving normalized values to edgeR is, in general, a Bad Thing.
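The bookkeeping in the recipe above can be illustrated with a toy sketch using plain character keys in place of bin pairs (my own illustration, not from the thread - it does not use the GInteractions or InteractionSet classes, and uses raw integer counts as recommended):

```r
# Two "samples" of (bin pair, count) observations
gi1 <- data.frame(key = c("A-B", "A-C", "B-B"), counts = c(5L, 2L, 7L))
gi2 <- data.frame(key = c("A-C", "B-B", "C-C"), counts = c(1L, 4L, 9L))

# Common reference of all bin pairs seen in either sample
combined <- unique(c(gi1$key, gi2$key))
m1 <- match(gi1$key, combined)
m2 <- match(gi2$key, combined)

# Count matrix; pairs unobserved in a sample stay at zero
counts <- matrix(0L, ncol = 2, nrow = length(combined),
                 dimnames = list(combined, c("s1", "s2")))
counts[m1, 1] <- gi1$counts
counts[m2, 2] <- gi2$counts
counts
##     s1 s2
## A-B  5  0
## A-C  2  1
## B-B  7  4
## C-C  0  9
```

The same match()-then-fill pattern is what the InteractionSet-based code performs on genomic coordinates.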
http://mathoverflow.net/questions/66956/expressing-as-a-boolean-formula
# Expressing >= as a Boolean formula

Given a bunch of Boolean variables $a_i \in \{0, 1\}$, I want to write a Boolean formula to express $\sum_{i=1}^n a_i \geq k$, i.e. I'm allowed to use $(, ), \wedge, \vee, \lnot$.

Now, if I allow the use of $\exists$, then I can do this as a formula of length $O(n^c)$ (basically create a circuit that adds together the $a_i$ and does a comparison against $k$, then use the existential quantifier to create variables representing the intermediate nodes of the circuit). However, if I am not allowed to use the existential quantifier, and I cannot create intermediate nodes, can I do this in a formula of subexponential length?

Thanks!

- I think you'll have better luck with this question at stackexchange. – Kevin Buzzard, Jun 5 '11 at 11:04

Answer:

Yes, there exist (uniformly constructible) polynomial-size Boolean formulas for threshold functions (which is how your functions are called). Equivalently, there are polynomial-size formulas for summing $n$ binary numbers of length $m$. Also equivalently, the complexity class (uniform) $\mathrm{TC}^0$ is contained in (uniform) $\mathrm{NC}^1$.

The easy way to do it is to use the so-called carry-save addition. This is a recursive construction whose basic step is a reduction of the computation of a sum of $3$ numbers $a,b,c$ to a sum of $2$ numbers $d,e$ using a linear-size constant-depth fan-in $2$ circuit (or formula): $d$ consists of bitwise sums of the inputs modulo $2$, disregarding any carries (i.e., the $i$th bit $d_i$ is $a_i\oplus b_i\oplus c_i$, where $\oplus$ denotes the parity function), whereas $e$ is the carry vector ($e_{i+1}=1$ iff $a_i+b_i+c_i\ge2$). By taking $n/3$ of these basic blocks in parallel, we can reduce a sum of $n$ numbers to a sum of $2n/3$ numbers by a constant-depth circuit, and by repeating this step $\log_{3/2}n$ times, we can sum $n$ numbers by a circuit of depth $O(\log n)$.
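The basic carry-save step can be sketched in code (my own illustration, not part of the answer): reduce the sum of three bit vectors $a,b,c$ to two vectors $d,e$ with $a+b+c=d+e$.

```r
# One carry-save step. Bits are little-endian logicals:
# element i is the coefficient of 2^(i-1).
carry_save <- function(a, b, c) {
  s <- as.integer(a) + as.integer(b) + as.integer(c)
  d <- (s %% 2) == 1          # bitwise sum mod 2 (a XOR b XOR c), no carries
  e <- c(FALSE, s >= 2)       # carry bits, shifted up one position
  list(d = d, e = e)
}

# Helper: interpret a logical vector as an integer
to_int <- function(bits) sum(as.integer(bits) * 2^(seq_along(bits) - 1))

a <- c(TRUE, TRUE, FALSE)     # 3
b <- c(TRUE, FALSE, TRUE)     # 5
c <- c(FALSE, TRUE, TRUE)     # 6
r <- carry_save(a, b, c)
to_int(r$d) + to_int(r$e)     # 14, equal to 3 + 5 + 6
```

Each output bit depends on only three input bits, which is what makes the step constant-depth with fan-in 2.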
Since a circuit of depth $d$ and fan-in $2$ with $k$ output bits can be expanded into a formula of size $k2^d$, this gives formulas of size $n^{O(1)}$.

In fact, threshold functions are also computable by polynomial-size log-depth monotone formulas (i.e., using only $\land$ and $\lor$, but not $\neg$), but this is harder to prove. A simple but nonconstructive probabilistic proof of the existence of such formulas was given by Valiant (Short monotone formulae for the majority function, J. of Algorithms 5 (1983), #3, 363–366). A constructive but very complicated solution follows from the construction of log-depth sorting networks by Ajtai, Komlós and Szemerédi (An $O(n\log n)$ sorting network, Proc. 15th STOC, 1983, 1–9).
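The arithmetic identity behind the carry-save step is easy to check directly. Here is a small Python sketch (plain integer arithmetic, not a circuit, and the sequential loop below ignores the parallel grouping that gives the $O(\log n)$ depth; function names are mine):

```python
def carry_save_step(a, b, c):
    """Reduce a sum of three non-negative integers to a sum of two.

    Bitwise: d_i = a_i XOR b_i XOR c_i (column sums mod 2, carries dropped),
    and e_{i+1} = 1 iff a_i + b_i + c_i >= 2 (the carry vector).
    """
    d = a ^ b ^ c                            # parity of each bit column
    e = ((a & b) | (a & c) | (b & c)) << 1   # majority of each column, shifted up
    return d, e

def add_many(nums):
    """Sum a list of numbers using only 3->2 reductions plus one final add."""
    nums = list(nums)
    while len(nums) > 2:
        a, b, c = nums.pop(), nums.pop(), nums.pop()
        d, e = carry_save_step(a, b, c)
        nums += [d, e]
    return sum(nums)
```

Each call to `carry_save_step` satisfies `a + b + c == d + e` in every bit column, which is exactly why the 3-to-2 reduction preserves the sum.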
https://altexploit.wordpress.com/2017/07/05/grothendiecks-abstract-homotopy-theory/
# Grothendieck's Abstract Homotopy Theory

Let E be a Grothendieck topos (think of E as the category, Sh(X), of set-valued sheaves on a space X). Within E, we can pick out a subcategory, C, of locally finite, locally constant objects in E. (If X is a space with E = Sh(X), C corresponds to those sheaves whose espace étale is a finite covering space of X.) Picking a base point in X generalises to picking a 'fibre functor' F : C → Sets_fin, a functor satisfying various conditions implying that it is pro-representable. (If x0 ∈ X is a base point, {x0} → X induces a 'fibre functor' Sh(X) → Sh({x0}) ≅ Sets, by pullback.) If F is pro-representable by P, then π1(E, F) is defined to be Aut(P), which is a profinite group. Grothendieck proves there is an equivalence of categories C ≃ π1(E)-Sets_fin, the category of finite π1(E)-sets.

If X is a locally nicely behaved space such as a CW-complex and E = Sh(X), then π1(E) is the profinite completion of π1(X). This profinite completion occurs only because Grothendieck considers locally finite objects. Without this restriction, a covering space Y of X would correspond to a π1(X)-set, Y′, but if Y is a finite covering of X then the homomorphism from π1(X) to the finite group of transformations of Y factors through the profinite completion of π1(X). This is defined by: if G is a group, Gˆ = lim(G/H : H ◅ G, H of finite index) is its profinite completion.

This idea of using covering spaces or their analogue in E raises several important points:

a) These are homotopy theoretic results, but no paths are used. The argument, involving sheaf theory, the theory of (pro)representable functors, etc., is of a purely categorical nature. This means it is applicable to spaces where the use of paths, and other homotopies, is impossible because of bad (or unknown) local properties. Such spaces have been studied within Shape Theory and Strong Shape Theory, although not by using Grothendieck's fundamental group, nor using sheaf theory.
b) As no paths are used, these methods can also be applied to non-spaces, e.g. locales and possibly to their non-commutative analogues, quantales. For instance, classically one could consider a field k and an algebraic closure K of k and then choose C to be a category of étale algebras over k, in such a way that π1(E) ≅ Gal(K/k), the Galois group of k. It, in fact, leads to a classification theorem for Grothendieck toposes. From this viewpoint, low dimensional homotopy theory is seen as being part of Galois theory, or vice versa.

c) This underlines the fact that π1(X) classifies covering spaces – but for i > 1, πi(X) does not seem to classify anything other than maps from S^i into X! This is abstract homotopy theory par excellence.
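The inverse-limit definition Gˆ = lim(G/H) can be made concrete in a toy case. The sketch below (Python, helper names mine) represents an element of lim Z/2ⁿZ, i.e. a 2-adic integer, as a tower of compatible residues; this tracks only the 2-power quotients of Z, not the full profinite completion:

```python
def to_compatible_residues(g, depth=6):
    """Image of an integer g in the finite quotients Z/2, Z/4, ..., Z/2^depth."""
    return [g % (2 ** n) for n in range(1, depth + 1)]

def is_compatible(residues):
    """Inverse-limit condition: each residue reduces to the one before it.

    residues[n] lives mod 2^(n+1); reducing it mod 2^n must recover residues[n-1].
    """
    return all(residues[n] % (2 ** n) == residues[n - 1]
               for n in range(1, len(residues)))
```

An element of the completion is any compatible tower, whether or not it comes from an actual integer; that is exactly the sense in which the completion can be bigger than G.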
https://en.wikipedia.org/wiki/Dilatant
# Dilatant

A dilatant (/daɪˈleɪtənt/, /dɪ-/) (also termed shear thickening) material is one in which viscosity increases with the rate of shear strain. Such a shear thickening fluid, also known by the initialism STF, is an example of a non-Newtonian fluid. This behavior is usually not observed in pure materials, but can occur in suspensions.

A dilatant is a non-Newtonian fluid where the shear viscosity increases with applied shear stress. This behavior is only one type of deviation from Newton's law, and it is controlled by such factors as particle size, shape, and distribution. The properties of these suspensions depend on Hamaker theory and Van der Waals forces and can be stabilized electrostatically or sterically. Shear thickening behavior occurs when a colloidal suspension transitions from a stable state to a state of flocculation. A large portion of the properties of these systems is due to the surface chemistry of particles in dispersion, known as colloids.

This can readily be seen with a mixture of cornstarch and water[1] (sometimes called oobleck), which acts in counterintuitive ways when struck or thrown against a surface. Sand that is completely soaked with water also behaves as a dilatant material. This is the reason why, when walking on wet sand, a dry area appears directly underfoot.[2]

Rheopecty is a similar property in which viscosity increases with cumulative stress or agitation over time. The opposite of a dilatant material is a pseudoplastic.

## Definitions

There are two types of deviation from Newton's law that are observed in real systems. The most common deviation is shear thinning behavior, where the viscosity of the system decreases as the shear rate is increased. The second deviation is shear thickening behavior where, as the shear rate is increased, the viscosity of the system also increases.
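One compact way to capture both deviations is the power-law fluid model given later in this section, η = K·γ̇^(n−1). A minimal Python sketch with hypothetical K and n values (the function name is mine, not from any rheology library):

```python
def viscosity(shear_rate, K, n):
    """Power-law (Ostwald-de Waele) model: eta = K * shear_rate**(n - 1)."""
    return K * shear_rate ** (n - 1)

shear_rates = [1.0, 10.0, 100.0, 1000.0]

# Hypothetical consistency index K and flow-behavior index n:
thickening = [viscosity(g, K=2.0, n=1.5) for g in shear_rates]  # dilatant, n > 1
thinning = [viscosity(g, K=2.0, n=0.5) for g in shear_rates]    # pseudoplastic, n < 1
```

With n > 1 the computed viscosities rise monotonically with shear rate (shear thickening); with n < 1 they fall (shear thinning), matching the two deviations described above.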
This behavior is observed because the system crystallizes under stress and behaves more like a solid than a solution.[3] Thus, the viscosity of a shear-thickening fluid is dependent on the shear rate. The presence of suspended particles often affects the viscosity of a solution. In fact, with the right particles, even a Newtonian fluid can exhibit non-Newtonian behavior. An example of this is cornstarch in water, which is included in the Examples section below.

The parameters that control shear thickening behavior are: particle size and particle size distribution, particle volume fraction, particle shape, particle-particle interaction, continuous phase viscosity, and the type, rate, and time of deformation. In addition to these parameters, all shear thickening fluids are stabilized suspensions and have a relatively high volume fraction of solid.[4]

Viscosity of a solution as a function of shear rate is given by the power-law equation,[5]

$\eta = K\dot{\gamma}^{n-1},$

where η is the viscosity, K is a material-based constant, and γ̇ is the applied shear rate. Dilatant behavior occurs when n is greater than 1.

Below is a table of viscosity values for some common materials.[6][7][8]

| Material | Viscosity (cP) |
| --- | --- |
| Benzene | 0.60 |
| Carbon tetrachloride | 0.88 |
| Ethanol | 1.06 |
| Water | 1–5 |
| Mercury | 1.55 |
| Pentane | 2.24 |
| Blood | 10 |
| Antifreeze | 14 |
| Sulfuric acid | 27 |
| Maple syrup | 150–200 |
| Honey | 2,000–3,000 |
| Chocolate syrup | 10,000–25,000 |
| Ketchup | 50,000–70,000 |
| Peanut butter | 150,000–250,000 |

### Stabilized suspensions

A suspension is composed of a fine, particulate phase dispersed throughout a differing, heterogeneous phase. Shear-thickening behavior is observed in systems with a solid, particulate phase dispersed within a liquid phase. These suspensions differ from a colloid in that they are unstable; the solid particles in dispersion are sufficiently large for sedimentation, causing them to eventually settle.
The solids dispersed within a colloid, by contrast, are smaller and will not settle. There are multiple methods for stabilizing suspensions, including electrostatics and sterics.

[Figure: energy of repulsion as a function of particle separation]

In an unstable suspension, the dispersed, particulate phase will come out of solution in response to forces acting upon the particles, such as gravity or Hamaker attraction. The magnitude of the effect these forces have on pulling the particulate phase out of solution is proportional to the size of the particulates; for a large particulate, the gravitational forces are greater than the particle-particle interactions, whereas the opposite is true for small particulates. Shear thickening behavior is typically observed in suspensions of small, solid particulates, indicating that the particle-particle Hamaker attraction is the dominant force. Therefore, stabilizing a suspension depends upon introducing a counteractive repulsive force.

Hamaker theory describes the attraction between bodies, such as particulates. It was realized that the explanation of Van der Waals forces could be upscaled from the interaction between two molecules with induced dipoles to macro-scale bodies, by summing all the intermolecular forces between the bodies. Similar to Van der Waals forces, Hamaker theory describes the magnitude of the particle-particle interaction as inversely proportional to the square of the distance. Therefore, many stabilized suspensions incorporate a long-range repulsive force that is dominant over the Hamaker attraction when the interacting bodies are at a sufficient distance, effectively preventing the bodies from approaching one another. At short distances, however, the Hamaker attraction dominates, causing the particulates to coagulate and fall out of solution. Two common long-range forces used in stabilizing suspensions are electrostatics and sterics.
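The interplay just described, long-range repulsion versus short-range Hamaker attraction, can be sketched numerically. The function below is a toy evaluation of a screened double-layer repulsion term plus a Hamaker-style attraction, in dimensionless units; every parameter value is hypothetical, chosen only to make the crossover visible:

```python
import math

def pair_energy(h, R=1.0, H=1.0, C=1.0, k=1.0, T=1.0, gamma=1.0, kappa=1.0):
    """Toy colloid pair energy (dimensionless units, all constants hypothetical):

        V = pi*R * ( -H / (12*pi*h**2)
                     + 64*C*k*T*gamma**2 * exp(-kappa*h) / kappa**2 )

    The first term is the Hamaker attraction, the second the screened
    electrostatic (double-layer) repulsion.
    """
    attraction = -H / (12 * math.pi * h ** 2)
    repulsion = 64 * C * k * T * gamma ** 2 * math.exp(-kappa * h) / kappa ** 2
    return math.pi * R * (attraction + repulsion)
```

With these toy numbers the pair energy is negative (net attractive) at very small separations and positive (net repulsive) at moderate ones, matching the qualitative picture in the text.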
#### Electrostatically stabilized suspensions

[Figure: particle in solution stabilized via the electrostatic double-layer force]

Suspensions of similarly charged particles dispersed in a liquid electrolyte are stabilized through an effect described by the Helmholtz double layer model. The model has two layers. The first layer is the charged surface of the particle, which creates an electrostatic field that affects the ions in the electrolyte. In response, the ions create a diffuse layer of equal and opposite charge, effectively rendering the surface charge neutral. However, the diffuse layer creates a potential surrounding the particle that differs from the bulk electrolyte. The diffuse layer serves as the long-range force for stabilization of the particles. When particles near one another, the diffuse layer of one particle overlaps with that of the other particle, generating a repulsive force. The following equation gives the energy between two colloids resulting from the Hamaker interactions and the electrostatic repulsion:

$V = \pi R\left(\frac{-H}{12\pi h^{2}} + \frac{64CkT\Gamma^{2}e^{-\kappa h}}{\kappa^{2}}\right),$

where:

- V = energy between a pair of colloids,
- R = radius of colloids,
- H = Hamaker constant between colloid and solvent,
- h = distance between colloids,
- C = surface ion concentration,
- k = Boltzmann constant,
- T = temperature in kelvins,
- Γ = surface excess,
- κ = inverse Debye length.

#### Sterically stabilized suspensions

[Figure: particle in suspension stabilized via steric hindrance]

Different from electrostatics, sterically stabilized suspensions rely on the physical interaction of polymer chains attached to the surface of the particles to keep the suspension stabilized; the adsorbed polymer chains act as a spacer to keep the suspended particles separated at a sufficient distance to prevent the Hamaker attraction from dominating and pulling the particles out of suspension.
The polymers are typically either grafted or adsorbed onto the surface of the particle. With grafted polymers, the backbone of the polymer chain is covalently bonded to the particle surface. An adsorbed polymer, by contrast, is a copolymer composed of lyophobic and lyophilic regions, where the lyophobic region non-covalently adheres to the particle surface and the lyophilic region forms the steric boundary or spacer.

## Theories behind shear thickening behavior

Dilatancy in a colloid, or its ability to order in the presence of shear forces, is dependent on the ratio of interparticle forces. As long as interparticle forces such as Van der Waals forces dominate, the suspended particles remain in ordered layers. However, once shear forces dominate, particles enter a state of flocculation and are no longer held in suspension; they begin to behave like a solid. When the shear forces are removed, the particles spread apart and once again form a stable suspension.

Shear thickening behavior is highly dependent upon the volume fraction of solid particulate suspended within the liquid. The higher the volume fraction, the less shear required to initiate the shear thickening behavior. The shear rate at which the fluid transitions from a Newtonian flow to a shear thickening behavior is known as the critical shear rate.

### Order to disorder transition

When shearing a concentrated stabilized solution at a relatively low shear rate, the repulsive particle-particle interactions keep the particles in an ordered, layered, equilibrium structure. However, at shear rates elevated above the critical shear rate, the shear forces pushing the particles together overcome the repulsive particle-particle interactions, forcing the particles out of their equilibrium positions.
This leads to a disordered structure, causing an increase in viscosity.[9] The critical shear rate here is defined as the shear rate at which the shear forces pushing the particles together are equivalent to the repulsive particle interactions.

### Hydroclustering

[Figure: transient hydroclustering of particles in a solution]

When the particles of a stabilized suspension transition from an immobile state to a mobile state, small groupings of particles form hydroclusters, increasing the viscosity. These hydroclusters are composed of particles momentarily compressed together, forming an irregular, rod-like chain of particles akin to a logjam or traffic jam. In theory, the particles have extremely small interparticle gaps, rendering this momentary, transient hydrocluster incompressible. It is possible that additional hydroclusters will form through aggregation.[10]

## Examples

### Corn starch and water (oobleck)

Cornstarch is a common thickening agent used in cooking. It is also a very good example of a shear-thickening system. When a force is applied to a 1:1.25 mixture of water and cornstarch, the mixture acts as a solid and resists the force.

### Silica and polyethylene glycol

Silica nano-particles are dispersed in a solution of polyethylene glycol. The silica particles provide a high-strength material when flocculation occurs. This allows it to be used in applications such as liquid body armor and brake pads.

## Applications

### Traction control

Dilatant materials have certain industrial uses due to their shear-thickening behavior. For example, some all-wheel drive systems use a viscous coupling unit full of dilatant fluid to provide power transfer between front and rear wheels. On high-traction road surfacing, the relative motion between primary and secondary drive wheels is the same, so the shear is low and little power is transferred. When the primary drive wheels start to slip, the shear increases, causing the fluid to thicken.
As the fluid thickens, the torque transferred to the secondary drive wheels increases proportionally, until the maximum amount of power possible in the fully thickened state is transferred. (See also limited-slip differential, some types of which operate on the same principle.) To the operator, this system is entirely passive, engaging all four wheels to drive when needed and dropping back to two-wheel drive once the need has passed. This system is generally used for on-road vehicles rather than off-road vehicles, since the maximum viscosity of the dilatant fluid limits the amount of torque that can be passed across the coupling.

### Body armor

Various corporate and government entities are researching the application of shear-thickening fluids for use as body armor. Such a system could allow the wearer flexibility for a normal range of movement, yet provide rigidity to resist piercing by bullets, stabbing knife blows, and similar attacks. The principle is similar to that of mail armor, though body armor using a dilatant would be much lighter. The dilatant fluid would disperse the force of a sudden blow over a wider area of the user's body, reducing the blunt force trauma. However, the dilatant would not provide any additional protection against slow attacks, such as a slow but forceful stab, which would allow flow to occur.[11]

In one study, standard Kevlar fabric was compared to a composite armor of Kevlar and a proprietary shear-thickening fluid. The results showed that the Kevlar/fluid combination performed better than the pure-Kevlar material, despite having less than one-third the Kevlar thickness.[11]

Four examples of dilatant materials being used in personal protective equipment are Armourgel, D3O, ArtiLage (Artificial Cartilage foam) and "Active Protection System" manufactured by Dow Corning.[12]

In 2002, researchers at the U.S.
Army Research Laboratory and University of Delaware began researching the use of liquid armor, or a shear-thickening fluid in body armor. Researchers demonstrated that high-strength fabrics such as Kevlar can be made more bulletproof and stab-resistant when impregnated with the fluid.[13][14] The goal of the "liquid armor" technology is to create a new material that is low-cost and lightweight while still offering equivalent or superior ballistic properties compared to current Kevlar fabric.[15] For their work on liquid armor, Dr. Eric Wetzel, an ARL mechanical engineer, and his team were awarded the 2002 Paul A. Siple Award, the Army's highest award for scientific achievement, at the Army Science Conference.[16]

The company D3O invented a non-Newtonian-based material that has seen wide adoption across a broad range of standard and custom applications, including motorcycle and extreme-sports protective gear, industrial work wear, military applications, and impact protection for electronics. The materials allow flexibility during normal wear but become stiff and protective when strongly impacted. While some products are marketed directly, much of the company's manufacturing capability goes to selling and licensing the material to other companies for use in their own lines of protective products.

## References

1. ^
2. ^
3. ^ Painter, Paul C.; Coleman, Michael M. (1997). Fundamentals of polymer science: an introductory text (2nd ed.). Lancaster, Pa.: Technomic. pp. 412–413. ISBN 978-1-56676-559-6.
4. ^ Galindo-Rosales, Francisco J.; Rubio-Hernández, Francisco J.; Velázquez-Navarro, José F. (22 May 2009). "Shear-thickening behavior of Aerosil® R816 nanoparticles suspensions in polar organic liquids". Rheologica Acta. 48 (6): 699–708. doi:10.1007/s00397-009-0367-7. S2CID 98809104.
5. ^ Cunningham, Neil. "Rheology School". Brookfield Engineering. Archived from the original on 25 July 2011. Retrieved 4 June 2011.
6. ^ Barnes, H. A.; Hutton, J. F.; Walters, K. (1989). An introduction to rheology (5. impr. ed.). Amsterdam: Elsevier. ISBN 978-0-444-87140-4.
7. ^ Atkins, Peter (2010). Physical chemistry (9th ed.). New York: W. H. Freeman and Co. ISBN 978-1-4292-1812-2.
8. ^ "Viscosity Chart". Research Equipment Limited. Retrieved 4 June 2011.
9. ^ Boersma, Willem H.; Laven, Jozua; Stein, Hans N. (1990). "Shear Thickening (Dilatancy) in Concentrated Dispersions". AIChE Journal. 36 (3): 321–332. doi:10.1002/aic.690360302.
10. ^ Farr, R. S.; et al. (June 1997). "Kinetic theory of jamming in hard-sphere startup flows". Physical Review E. 55 (6): 7203–7211. Bibcode:1997PhRvE..55.7203F. doi:10.1103/physreve.55.7203.
11. ^ a b Gill, Victoria (2010-07-09). "Liquid armour 'can stop bullets'". BBC News.
12. ^ [1] Archived June 3, 2010, at the Wayback Machine.
13. ^ "A Call to Armor: Army Explores Stronger, Lighter, Cheaper Protection". Association of the United States Army. 2016-05-20. Retrieved 2018-07-11.
14. ^ "Liquid Armor: University of Delaware's innovation". Body Armor News. 2015-03-10. Retrieved 2018-07-11.
15. ^ "How the U.S. Army Uses Liquid Body Armor". The Balance Careers. Retrieved 2018-07-11.
16. ^ "Army Scientists, Engineers Develop Liquid Body Armor". CorrectionsOne. Retrieved 2018-07-11.
https://blog.slcg.com/2010/05/finra-press-release-auction-rate.html
## Monday, May 24, 2010

### FINRA Press Release: Auction Rate Securities

FINRA Fines Nuveen $3 Million for Use of Misleading Marketing Materials Concerning Auction Rate Securities

The Financial Industry Regulatory Authority (FINRA) issued a press release this week announcing that it has fined Nuveen Investments, LLC, of Chicago, $3 million for creating misleading marketing materials used in sales of auction rate preferred securities (ARPS). The Nuveen Funds' ARPS were a form of auction rate securities, which are long-term securities with interest rates or dividend yields that are reset periodically through an auction process. In contrast to other types of auction rate securities, the Nuveen ARPS were preferred shares issued by closed-end mutual funds to raise money for the funds to invest. The settlement is detailed in FINRA AWC No. 2008013056701.

Auction rate securities (ARS) were first issued in the mid-1980s by corporations. The market for ARS grew rapidly over the next two decades, and ARS were widely issued by a diverse range of institutions such as closed-end mutual funds, municipalities and student loan trusts. ARS were long-term floating rate securities whose coupon payments were determined at auctions that were typically held every 7 to 35 days, making ARS long-term securities with short-term floating rates. Broker-dealers marketed ARS as liquid, short-term cash equivalents. However, ARS auctions failed en masse in February 2008 and proved to be illiquid and unsellable in the short term.

SLCG has written papers that describe ARS: what they are, how their auctions worked, and why they failed. SLCG was also recently hired by the State of North Carolina to advise on liquidity solutions for ARS investors who have not yet been able to redeem these illiquid securities. Investors can use our dedicated website for other in-depth analyses of security products.
http://lists.gnu.org/archive/html/lilypond-devel/2009-09/msg00084.html
lilypond-devel [Top][All Lists]

## Re: Contemporary music documentation

From: Carl Sorensen
Subject: Re: Contemporary music documentation
Date: Sat, 5 Sep 2009 20:46:42 -0600

On 9/5/09 7:12 PM, "Graham Percival" <address@hidden> wrote:

> On Sat, Sep 05, 2009 at 06:57:59AM -0600, Carl Sorensen wrote:
>>
>> On 9/4/09 6:03 PM, "Graham Percival" <address@hidden> wrote:
>>
>>> \\\\override editorial.itely expressive.itely pitches.itely
>>> repeats.itely rhythms.itely simultaneous.itely staff.itely
>>> text.itely | wc
>>> 72 465 4521
>>
>> I think your grep is mistaken. The autobeaming stuff isn't \override, but
>> \overrideAutoBeamSettings (in 2.12) and \overrideBeamSettings (in 2.13.4).
>
> Well, those are still tweaks, right? And \\\\override matches
> anything that starts "\override".
>
>> When GLISS comes along, I think that name will have to go. Just like you
>> don't want \setFoo instead of \set Context.foo, we don't want
>> \overridePropertySetting #value instead of \override property = #value.
>
> Yes.
>
>> But I believe that *all* tweaks have been removed from the autobeaming
>> documentation.
>
> Hmm. This could be a meaningless semantic quibble, or it could be
> something that's fundamental to the docs, GLISS, and development
> in general. Does a change to the autobeaming, done via
> \overrideAutoBeamSettings, constitute a "tweak"? Offhand, I'd
> say "yes".

No, I don't think they are tweaks. That is a defined command to achieve a particular behavior. In my opinion, a call to \overrideBeamSettings is fundamentally equivalent to a call to \hideNotes. It's specific defined LilyPond syntax.

On the other hand, while \override and \set are LilyPond syntax, they are so general that virtually anything can be done with them. In my mind, that's why we don't want them in the text body; we can't reasonably maintain totally flexible syntax.

> Beside that point, look at NR 1.6.2. Now, Staff symbol is a
> disaster, but Ossia?
> It would be a very different doc section if we
> had no tweaks in there.
> (that said, this is probably a great example of a place where we
> should add predefined commands... could we change:
> \new Staff \with {
>   \remove "Time_signature_engraver"
>   alignAboveContext = #"main"
>   fontSize = #-3
>   \override StaffSymbol #'staff-space = #(magstep -3)
>   \override StaffSymbol #'thickness = #(magstep -3)
>   firstClef = ##f
> }
> to
> \new Staff \with {
>   \ossiaSize
> }
> ? Probably not, which is why people were talking about making a
> \new SmallStaff context.
>
> There's also 1.7.1 Selecting notation font size. I think the
> \set fontSize = #3 commands aren't a bad thing in this context.
>
> Such items are very much the exception, but I think a few
> exceptions are useful.

I agree that a few exceptions are useful, but the more we can move toward predefineds for commonly used functions, the better off we are.

>> At any rate, I don't think we should be *adding* new sections with tweaks in
>> the main text to NR1+2.
>
> Hmm. In this case, I'm with Trevor -- Joseph is all fired up
https://www.maplesoft.com/support/help/Maple/view.aspx?path=plot3d%2Fcoords
# Set Coordinate System for 3-D Plots

Description

• The default coordinate system for all three-dimensional plotting commands is the Cartesian coordinate system. The coords option allows the user to alter this coordinate system. The alternate choices are: bipolarcylindrical, bispherical, cardioidal, cardioidcylindrical, casscylindrical, confocalellip, confocalparab, conical, cylindrical, ellcylindrical, ellipsoidal, hypercylindrical, invcasscylindrical, invellcylindrical, invoblspheroidal, invprospheroidal, logcoshcylindrical, logcylindrical, maxwellcylindrical, oblatespheroidal, paraboloidal, paracylindrical, prolatespheroidal, rosecylindrical, sixsphere, spherical, tangentcylindrical, tangentsphere, and toroidal.

• For a description of each of the above coordinate systems, see the coords help page.

• When using Cartesian coordinates, z, the vertical coordinate, is expressed as a function of x and y: plot3d(z(x,y), x=a..b, y=c..d).

• For alternate coordinate systems this is interpreted differently. For example, when using cylindrical coordinates, Maple expects the command to be of the following form: plot3d(r(theta,z), theta=a..b, z=c..d, coords=cylindrical). Here r, the distance from the origin to the projection of the point onto the x-y plane, is a function of theta, the counterclockwise angle from the positive x-axis, and of z, the height above the x-y plane. For spherical coordinates the interpretation is plot3d(r(theta,phi), theta=a..b, phi=c..d, coords=spherical), where theta is the counterclockwise angle measured from the x-axis in the x-y plane, and phi is the angle measured from the positive z-axis, or the colatitude. These angles determine the direction from the origin, while the distance from the origin, r, is a function of phi and theta. Other coordinate systems have similar interpretations.

• The conversions from the various coordinate systems to Cartesian coordinates can be found in coords.

• All coordinate systems are also valid for parametrically defined 3-D plots, with the same interpretations of the coordinate system transformations.

Examples

> plot3d(sin(x)+sin(y), x=0..2*Pi, y=0..2*Pi, axes=boxed);
> plot3d(height, angle=0..2*Pi, height=-5..5, coords=cylindrical, title=CONE);
> plot3d(1, t=0..2*Pi, p=0..Pi, coords=spherical, scaling=constrained);
> plot3d(theta, theta=0..8*Pi, z=-1..1, coords=cylindrical);
> plot3d(theta, theta=0..8*Pi, phi=0..Pi, coords=spherical, style=wireframe);
> plot3d(theta, theta=0..8*Pi, phi=0..Pi, coords=toroidal(2), style=wireframe);

Define a new cylindrical system so z = z(r, theta) instead of r = r(theta, z):

> addcoords(z_cylindrical, [z, r, theta], [r*cos(theta), r*sin(theta), z]);
> plot3d(r*cos(theta), r=0..10, theta=0..2*Pi, coords=z_cylindrical, title=z_cylindrical, orientation=[-132,71], axes=boxed);

The command to create the plot from the Plotting Guide is

> plot3d(r*cos(theta), r=0..10, theta=0..2*Pi, coords=cylindrical, orientation=[100,71]);
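The cylindrical convention above (the plotted quantity is r(theta, z)) can be mirrored outside Maple. Below is a small numpy sketch (not Maple code) that applies the same mapping to Cartesian coordinates that plot3d performs before rendering, using the CONE example, where r(theta, z) = z:

```python
import numpy as np

# Maple's cylindrical convention: the plotted quantity is r(theta, z).
# Rendering maps each (theta, z) sample to Cartesian coordinates:
#   x = r*cos(theta), y = r*sin(theta), z = z
def cylindrical_to_cartesian(r, theta, z):
    return r * np.cos(theta), r * np.sin(theta), z

theta = np.linspace(0, 2 * np.pi, 60)
height = np.linspace(-5, 5, 40)
T, Z = np.meshgrid(theta, height)   # shape (40, 60) each
R = Z                               # r(theta, z) = z gives the double cone
X, Y, Zc = cylindrical_to_cartesian(R, T, Z)

# Every mapped point satisfies x^2 + y^2 = r^2 = z^2, the cone equation.
assert np.allclose(X**2 + Y**2, Zc**2)
```

The resulting X, Y, Zc arrays are exactly what a Cartesian surface plotter (e.g. matplotlib's plot_surface) would consume.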
https://gamedev.stackexchange.com/questions/184345/how-to-specify-font-size-in-nifty-gui-when-using-ttf-fonts
# How to specify font size in nifty GUI when using ttf fonts I'm trying to use a font with a larger size in a ttf file using nifty GUI. All questions and answers I've seen are concerned with making fnt fonts in a different size, but the size I want to use is actually included in the ttf file. I tried setting the fontSize and size attributes with no effect (no errors, but the font size does not change either). Below is the XML excerpt where I'm trying to use the ttf font with a large size: <control id="score" name="label" color="#fff" text="0" width="50%" height="20%" textHAlign="left" font="Interface/Fonts/Kenney Future.ttf"/>
https://jmlr.org/beta/papers/v16/carpentier15a.html
# Adaptive Strategy for Stratified Monte Carlo Sampling

Alexandra Carpentier, Remi Munos, András Antos.

Year: 2015, Volume: 16, Issue: 68, Pages: 2231−2271

#### Abstract

We consider the problem of stratified sampling for Monte Carlo integration of a random variable. We model this problem as a $K$-armed bandit, where the arms represent the $K$ strata. The goal is to estimate the integral mean, that is, a weighted average of the mean values of the arms. The learner is allowed to sample the variable $n$ times, but it can decide on-line which stratum to sample next. We propose a UCB-type strategy that samples the arms according to an upper bound on their estimated standard deviations. We compare its performance to an ideal sample allocation that knows the standard deviations of the arms. For sub-Gaussian arm distributions, we provide bounds on the total regret: a distribution-dependent bound of order $\text{poly}(\lambda_{\min}^{-1})\tilde{O}(n^{-3/2})$ that depends on a measure of the disparity $\lambda_{\min}$ of the per-stratum variances, and a distribution-free bound $\text{poly}(K)\tilde{O}(n^{-7/6})$ that does not. (The notation $a_n=\text{poly}(b_n)$ means that there exist $C,\alpha>0$ such that $a_n\le Cb_n^\alpha$ for $n$ large enough. Moreover, $a_n=\tilde{O}(b_n)$ means that $a_n/b_n=\text{poly}(\log n)$ for $n$ large enough.) We give similar, but somewhat sharper, bounds on a proxy of the regret. The problem-independent bound for this proxy matches its recent minimax lower bound in terms of $n$ up to a $\log n$ factor.
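The setting is easy to simulate. The sketch below (plain Python, not code from the paper) compares uniform allocation with the ideal allocation proportional to $w_k\sigma_k$; here the standard deviations are assumed known, whereas the paper's adaptive strategy must estimate them on-line:

```python
import random

random.seed(0)

# Two strata with equal weights but very different standard deviations.
strata = [
    (0.5, lambda: random.gauss(0.0, 0.1)),   # (weight, sampler): low variance
    (0.5, lambda: random.gauss(1.0, 2.0)),   # high variance
]
true_mean = 0.5 * 0.0 + 0.5 * 1.0

def estimate(allocs, _n):
    # Stratified estimate: sum_k w_k * (sample mean within stratum k).
    est = 0.0
    for (w, draw), n_k in zip(strata, allocs):
        est += w * sum(draw() for _ in range(n_k)) / n_k
    return est

n = 10_000
uniform = [n // 2, n // 2]

# Ideal allocation: n_k proportional to w_k * sigma_k (sigma known here).
sigmas = [0.1, 2.0]
total = sum(0.5 * s for s in sigmas)
oracle = [max(1, round(n * 0.5 * s / total)) for s in sigmas]

err_uniform = abs(estimate(uniform, n) - true_mean)
err_oracle = abs(estimate(oracle, n) - true_mean)
```

The estimator variance is $\sum_k w_k^2\sigma_k^2/n_k$, which the oracle allocation minimizes; averaging over many runs shows the oracle's smaller spread, while any single run merely lands close to the true mean.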
http://crypto.stackexchange.com/questions/11300/timing-attacks-on-ecdsa-ecdhe-aes-and-sha2/11304
# Timing Attacks on ECDSA, ECDHE, AES and SHA2

Are there any known timing attacks (both practical and theoretical) on any implementations of the following?

1. ECDSA (I'm aware of this one - are there any applicable to prime fields?),
2. ECDHE (again, over a prime field),
3. AES,
4. SHA2

- I'd expect typical implementations of SHA-2 to resist timing attacks, but naive implementations of all others should suffer from timing attacks. What's tricky is figuring out in which situations they're practical and in which they are not. –  CodesInChaos Oct 26 '13 at 13:48

The attack which you link to, on ECDSA, is related to the following: the signer computes several values $kG$, for random $k$ values chosen uniformly modulo $n$ ($n$ is the size of the subgroup generated by $G$). One such value is generated for each signature. It is important that the selection is uniform: even small biases can be exploited in order to mount a key recovery attack.

The crucial point is that each ECDSA signature consists of a pair $(r,s)$ where $s = (m+xr)/k \pmod n$ and $m$ is known (it is the hash of the signed message). Each signature involves a new $k$, thus a new $r$ (because $r$ is obtained from $kG$); the $r$ values follow a definite distribution modulo $n$. So if the $k$ values are biased, then the bias easily turns into an information leak on $x$ (the private key). As an extreme case, if a $k$ value is reused, then the two corresponding signatures suffice to recompute $x$ directly (this is what happened to Sony with their PS3 firmware signature system).

In the OpenSSL implementation of $kG$ on binary curves (using a variant of Montgomery's ladder), it so happened that the code was "optimizing" things by making $\lceil\log k\rceil$ iterations only; so if $k$ was shorter than $n$ by, say, 3 bits (which happens once every eight signatures or so), then the loop was slightly faster (3 fewer iterations). The attacker can detect that.
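The nonce-reuse extreme case (the Sony incident) follows from the signature equation alone. A toy Python sketch over a small prime group order (illustrative numbers, not a real curve; $r$ is fixed by hand rather than derived from $kG$):

```python
# Private-key recovery from two (EC)DSA signatures sharing the same nonce k.
# All arithmetic is mod n, the group order; a small prime stands in for a
# real curve order, and r is an arbitrary stand-in for the x-coord of kG.
n = 7919      # toy prime group order
x = 1234      # "private key"
k = 999       # the reused nonce
r = 4321      # same r for both signatures, since k (hence kG) is reused

def sign(m):
    s = (m + x * r) * pow(k, -1, n) % n
    return (r, s)

m1, m2 = 1111, 2222
(r1, s1), (r2, s2) = sign(m1), sign(m2)

# s1 - s2 = (m1 - m2)/k  =>  k = (m1 - m2)/(s1 - s2)  (mod n)
k_rec = (m1 - m2) * pow(s1 - s2, -1, n) % n
# Then from one signature:  x = (s*k - m)/r  (mod n)
x_rec = (s1 * k_rec - m1) * pow(r1, -1, n) % n

assert k_rec == k and x_rec == x
```

The same two-line algebra applies verbatim to a real curve order; only the sizes change.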
If the attacker can make the target generate signatures on messages that the attacker knows (that's easy when the target is an SSL server; the attacker just has to connect), then the attacker can keep a subset of the signatures, precisely those where $k$ was "short". This becomes equivalent to a biased $k$ generation, and leads to private key reconstruction.

I describe this to show the important point: it is not specific to elliptic curves over binary fields. In fact, it is not specific to elliptic curves. A classic square-and-multiply implementation of a modular exponentiation for plain DSA could be equally vulnerable. Elliptic curves (in prime fields or binary fields) are not especially weak or strong in that matter. It is best described as a weakness of DSA (and variants, including ElGamal signatures and Schnorr signatures). To protect against it, the implementation must take care to take a constant time in all cases, so as not to leak information on $k$.

It seems unlikely that ECDH would suffer from a similar issue. With ECDH, leaking information about the private key length seems benign enough. The problem with DSA arises from leaking bias information on a value $k$ which is used along with the private key $x$ in "simple" operations (multiplication and division modulo a prime). Diffie-Hellman, be it modulo a prime or on an elliptic curve, does not feature such "simple operations" involving a randomly selected value and a permanent private key. Moreover, in the case of ECDHE, the final "E" means ephemeral, which more or less implies that the DH private key will not live long. An SSL/TLS server may reuse the same DH private key for a few connections (as long as the private key is kept in RAM only, it fulfills the "ephemeral" contract), but normal SSL/TLS servers tend not to do that much. Generating a new DH private key is efficient.
This correspondingly severely limits the power of the attacker: timing attacks are statistical in nature, which means that they need to work over several measurements which involve the same secret value. With ephemeral DH keys, the private keys can be regenerated often enough to defeat timing attacks in a quite generic way.

For AES, timing attacks are again based on implementation characteristics. Specifically, "normal" AES implementations use lookup tables, and thus exercise caches. The attack will work either by evicting parts of these tables from cache and then measuring the algorithm execution time (thus counting the number of cache misses involved during the execution), or by having the attacker's data evicted from cache when the AES block cipher runs, and then working out (again with timings) which parts of the data were impacted. In both cases, this allows the attacker to gain some information on the intermediate values used in the algorithm, leaking information on the key.

The attack is quite feasible when the attacker can run his own code on the same machine, so as to use the same caches (this is a setup which applies to traditional multi-user mainframes, but also to virtual machines, where one VM may spy on another). For a remote attacker, this has also been demonstrated in lab conditions, but then it depends on a lot of parameters, and the remoteness makes execution time harder to measure with high precision; applicability to any specific real-life situation is hard to assess.

There are table-free AES implementations (see this question) but they use specific CPU features, or are quite slower, or both. Recent x86 CPUs have hardware AES which is very fast, and immune to such timing attacks.

SHA-2 is a family of hash functions. As such, they don't have keys. If they contain no secret, then there is nothing to obtain through timing attacks... A hash function may be used on secret data, though, in particular in HMAC.
SHA-2 is all arithmetic operations, no tables, no data-dependent branches. It seems hard to make a SHA-2 implementation which would be susceptible to timing attacks. -

Thank you for the detailed explanation. –  Chris Smith Oct 26 '13 at 22:14

## Elliptic Curves over binary fields

In naive implementations of elliptic curves, both $GF(p)$ and $GF(2^{n})$ will be vulnerable to some timing attacks. The paper you provided is on OpenSSL's implementation of EC over $GF(2^{n})$. This implementation uses Montgomery's ladder scalar multiplication, which is in fact very good for making sure that most of the multiplication runs in a constant amount of time per round. The issue is that the algorithm does not always run a constant number of rounds.

## Elliptic Curves over Prime Fields

The attack presented against binary fields above could work against some implementations of some elliptic curves. However, many of the NIST prime fields are just a little bit less than a power of two, so the bits of $k$ can have only a very small bias. In ECDHE, if the peer is the attacker, many more inputs are controlled by the peer, but the keys have a short lifetime.

This paper is about attacking OpenSSL's RSA by measuring the time used by a specific reduction at the end of Montgomery reduction. Although this is for RSA, a similar reduction can be used when using Montgomery arithmetic to calculate ECP (and thus it can be expected that some ECP implementations are likely vulnerable).

## AES

Most AES implementations use table lookups, which means they usually are vulnerable to at least cache timing attacks. Paper describing timing attack against table lookup-based AES implementation on OpenSSL.
There are ways to avoid using table lookups, but they have a cost:

• Either use a new processor with AES instructions, or
• Use only constant-time instructions to replace the table lookup (likely causing a significant performance penalty)

## SHA-256 and SHA-512

SHA-256 and SHA-512 implementations use only addition, rotation, and bitwise operations, all of which execute in constant time on most processors. The algorithms contain no input- or state-dependent table lookups. Thus, implementations of these algorithms are most likely free of timing attacks.

## Summary

For most of these algorithms (ECDSA, ECDHE, AES), a timing-attack-free implementation can be very hard to achieve, and it may also be significantly slower. For these reasons, it is often not practical to make a timing-attack-free implementation. If you care about timing attacks, you need to ensure you use an implementation expected to be timing-attack resistant. For AES, it is common for hardware-based implementations to be timing-attack resistant (but not all are). For SHA-2, most implementations appear likely to be side-channel free. In ECDHE, the keys used are ephemeral. For this reason, although in theory it could be possible to mount a timing attack, it is harder to make practical timing attacks than against ECDSA or AES. -

There is vast literature on timing attacks on AES, but to the best of my knowledge no such attack on SHA-2 or any construction that uses SHA-2 (e.g., HMAC-SHA256). -
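One practical caveat on the HMAC-SHA256 point: even with a constant-time hash, verifying a tag with an early-exit byte comparison can leak the position of the first mismatch. Python's standard library provides a constant-time comparison for exactly this; a short sketch:

```python
import hmac
import hashlib

key, msg = b"secret-key", b"payload"
tag = hmac.new(key, msg, hashlib.sha256).digest()

def verify(message, candidate):
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # hmac.compare_digest takes time independent of where the first
    # differing byte is, unlike a naive `expected == candidate` check,
    # which may short-circuit on the first mismatch.
    return hmac.compare_digest(expected, candidate)

assert verify(msg, tag)
assert not verify(msg, b"\x00" * len(tag))
```

This defends the comparison step only; the underlying primitive's timing behavior is a separate matter, as discussed above.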
https://stats.stackexchange.com/questions/151398/how-to-derive-the-cdf-of-a-lognormal-distribution-from-its-pdf
# How to derive the cdf of a lognormal distribution from its pdf

I'm trying to understand how to derive the cumulative distribution function for a lognormal distribution from its probability density function. I know that the pdf is: $$f(x) =\frac{e^{-\frac{1}{2}\Bigl(\frac{\ln(x) - \mu}{\sigma}\Bigr)^2}}{x \sigma \sqrt{2\pi}},\ x \gt 0$$ and the cdf is: $$\Phi(x) = \int_{-\infty}^x f(y) dy = \frac{1}{\sigma\sqrt{2\pi}}\int_0^x e^{-\frac{1}{2}\Bigl(\frac{\ln(y) - \mu}{\sigma}\Bigr)^2}\frac{dy}{y}.$$ Now, I don't know how to get the last formula. I tried the substitution $t=\ln(y)$ but I don't know how to deal with the $1/y$ term. Am I on the wrong track, or did I make a mistake? I'd like to know it for my personal knowledge, it's not for a class.

• Please add the self-study tag and read its tag wiki, modifying your question as needed to comply with the outline there. May 8, 2015 at 11:00

If $Y \sim \log N(\mu,\sigma^2)$ then $$f_Y(y) = \frac{1}{y\sqrt{2\pi \sigma^2}}\exp\left[-\frac{1}{2}\left(\frac{\log y-\mu}{\sigma}\right)^2\right]$$ Now, $$F_Y(y_1)=\int_{0}^{y_1}f_Y(y)dy = \int_{0}^{y_1}\frac{1}{y\sqrt{2\pi \sigma^2}}\exp\left[-\frac{1}{2}\left(\frac{\log y-\mu}{\sigma}\right)^2\right]dy$$ Substituting $$\frac{\log y - \mu}{\sigma} = z \implies \frac{dy}{y\sigma} = dz, \text{ and } z_1 = \frac{\log y_1 - \mu}{\sigma}$$ gives $$F_Y(y_1)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_1}\exp(-z^2/2)dz$$ Oh boy, that's a standard normal distribution. I think it's easy from here.

• This appears to go a little beyond the 'hints and guidance' appropriate for self-study posts May 8, 2015 at 12:27

I suppose you have the error function in mind. You can always turn the CDF back into a normal CDF. Recall that $X$ is lognormal iff $\log X\sim N \left(\mu,\sigma^2 \right)$.

1) You need to take account of the Jacobian of the transformation.

2) It might be easier to get the pdf from the cdf and then you might spot your error.
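The closed form $F_Y(y)=\Phi\bigl((\log y-\mu)/\sigma\bigr)$ can be sanity-checked numerically. A small Python sketch (the values of $\mu$ and $\sigma$ are arbitrary) compares it, via the error function identity $\Phi(z)=\tfrac12(1+\operatorname{erf}(z/\sqrt2))$, against direct integration of the pdf:

```python
import math

mu, sigma = 0.3, 0.8

def pdf(x):
    # lognormal density, valid for x > 0
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2 * math.pi))

def cdf_closed(x):
    # F(x) = Phi((log x - mu)/sigma), with Phi via the error function
    z = (math.log(x) - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def cdf_numeric(x, steps=100_000):
    # trapezoidal integration of the pdf on (0, x]; pdf -> 0 as x -> 0+
    h = x / steps
    total = 0.0
    for i in range(1, steps + 1):
        a, b = (i - 1) * h, i * h
        fa = pdf(a) if a > 0 else 0.0
        total += 0.5 * (fa + pdf(b)) * h
    return total

for x in (0.5, 1.0, 3.0):
    assert abs(cdf_closed(x) - cdf_numeric(x)) < 1e-4
```

Agreement at several points is of course not a proof, but it catches sign and Jacobian mistakes of exactly the kind the question worries about.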
https://www.bradford-delong.com/2019/06/eg-vlbw.html
Eric Chyn, Samantha Gold, and Justine S. Hastings: The Returns to Early-Life Interventions for Very Low Birth Weight Children: "Administrative data from Rhode Island... regression discontinuity design... 1,500-gram threshold for Very Low Birth Weight (VLBW) status.... Threshold crossing causes more intense in-hospital care, in line with prior studies. Threshold crossing also causes a 0.34 standard deviation increase in test scores in elementary and middle school, a 17.1 percentage point increase in the probability of college enrollment, and a $66,997 decrease in social program expenditures by age 14. We explore potential mechanisms driving these impacts... #noted
https://api.philpapers.org/rec/LEUAPA-3
# Aristotle's Physics: a critical guide

Cambridge University Press (2015)

Abstract

Aristotle's study of the natural world plays a tremendously important part in his philosophical thought. He was very interested in the phenomena of motion, causation, place and time, and teleology, and his theoretical materials in this area are collected in his Physics, a treatise of eight books which has been very influential on later thinkers. This volume of new essays provides cutting-edge research on Aristotle's Physics, taking into account recent changes in the field of Aristotle in terms of its understanding of key concepts and preferred methodology. The contributions reassess the key concepts of the treatise, reconstruct Aristotle's methods for the study of nature, and determine the boundaries of his natural philosophy. Due to the foundational nature of Aristotle's Physics itself, the volume will be a must-read for all scholars working on Aristotle.

Reprint years: 2018

ISBN(s): 9781107031463, 9781108454186, 1108454186, 110703146X
https://www.physicsforums.com/threads/calculating-equilibrium-point-of-a-network-of-springs.378274/
# Calculating equilibrium point of a network of springs

1. Feb 14, 2010

### o_z

Hi, I'm trying to find a way to calculate the resting position of a network of springs that is built as follows: n springs with identical k constant, but with different resting lengths, are connected together at one end of each spring. The other end of each spring is fixed to some point in 3d space - meaning, that position cannot be changed by the spring; only the end that is connected to all the other springs can move. Now, if I move the fixed positions of all/some of the springs, how can I calculate the resting (equilibrium) position in space of the point at which all springs are connected?

Ofer

2. Feb 15, 2010

### aracharya

Consider the "floating point". Each spring yields a force with an x component, a y component, and a z component. If we sum all the x components, we get the net force in the x direction. Likewise for y & z. For the "floating point" to be in equilibrium, the net force in each direction (x, y, z) should be zero. Therefore we have three simultaneous equations that we have to solve:

Net force in x direction = 0
Net force in y direction = 0
Net force in z direction = 0

We have three equations and three unknowns (the coordinates of the floating point). Therefore we can solve these equations to get the equilibrium position of the floating point.

3. Feb 15, 2010

### aracharya

Note however that not all solutions are valid equilibrium points. For example, consider the case where the fixed points are distributed on a circle with equal angular spacings. Then if the springs are under tension, the centre of the circle is an equilibrium point. However, if the springs are in compression, then the centre point, though a solution to the equations above, is not a valid equilibrium point since it is unstable.
What determines whether a solution to the above is a true equilibrium point is whether the point corresponds to a minimum of the total potential energy of the system (see the minimum total potential energy principle).

4. Feb 15, 2010

### o_z

Thanks for the replies! The basic problem I'm having is: how do I know the force that each spring yields if I don't know the equilibrium point? What I mean is, I know the force's strength, but not its direction. So how can I solve those equations without knowing the different x, y and z components of the force? In the meantime, I've written a simplified spring network solver for this specific case that seems to solve the issue for my needs. But I'm still interested in knowing if there's a way to calculate that without simulation.

Thanks,
o
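The follow-up question dissolves once the problem is posed as energy minimization: each force direction depends on the unknown point, so with nonzero rest lengths the system is nonlinear, and a numerical minimizer of the total potential energy is the standard route. A small numpy sketch with illustrative anchors, rest lengths, and step size (plain gradient descent; a library optimizer would also do):

```python
import numpy as np

# Fixed anchor points, one per spring, and each spring's rest length.
anchors = np.array([[ 1.0,  0.0, 0.0],
                    [ 0.0,  1.0, 0.0],
                    [ 0.0,  0.0, 1.0],
                    [-1.0, -1.0, 0.0]])
rest = np.array([0.5, 0.5, 0.5, 0.8])
k = 2.0  # identical spring constant, as in the original question

def spring_grads(p):
    # Per-spring gradient of E = sum_i 0.5*k*(|p - a_i| - L_i)^2,
    # i.e. minus the force each spring exerts on the floating point.
    d = p - anchors
    dist = np.linalg.norm(d, axis=1)
    return (k * (dist - rest) / dist)[:, None] * d

p = anchors.mean(axis=0)          # start at the centroid of the anchors
for _ in range(10_000):
    p = p - 0.01 * spring_grads(p).sum(axis=0)   # small fixed step

residual = np.linalg.norm(spring_grads(p).sum(axis=0))
assert residual < 1e-6            # net force ~ zero: an equilibrium point
```

The stability caveat from post #3 is handled for free: gradient descent on the energy can only settle at a (local) minimum, not at an unstable stationary point.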
# Seven integers under 127 and their Ratios

Show that among seven distinct positive integers not greater than 126, one can find two of them, say $x$ and $y$, satisfying $1\lt \frac{y}{x}\le 2$. Prove also that 126 could not be replaced by 127.

### Solution 1

Arrange the seven given integers in ascending order: $x_{1}\lt x_{2}\lt \ldots \lt x_{6}\lt x_{7}$. Assume that the required inequality never holds, i.e., that $\frac{x_{i+1}}{x_{i}} \gt 2$ for $i=1,2,\ldots ,6$. This means that

$\frac{x_{2}}{x_{1}} \gt 2,\; \frac{x_{3}}{x_{2}} \gt 2,\; \frac{x_{4}}{x_{3}} \gt 2,\; \frac{x_{5}}{x_{4}} \gt 2,\; \frac{x_{6}}{x_{5}} \gt 2,\; \frac{x_{7}}{x_{6}} \gt 2.$

Multiplying all six gives $\frac{x_{7}}{x_{1}} \gt 2^{6}=64$, so $64x_{1} \lt x_{7}\le 126$, implying that $x_{1}=1$. This means that the least value $x_{2}$ could have is $3 = 2x_{1}+1$. Similarly, the least $x_{3}$ could be is $7=2x_{2}+1$. Continuing this way we get, by necessity, $x_{1},\ldots ,x_{6}$ at least $1, 3, 7, 15, 31, 63$, and hence $x_{7}\ge 127$. Contradiction, since $x_{7}\le 126$. So the assumption is false, and for some $i$ the required inequality holds.

The sequence $1, 3, 7, 15, 31, 63, 127$ also serves as the counterexample sought in the second part of the problem: every ratio of its consecutive terms strictly exceeds 2.

### Solution 2

[Barbeau et al.] If the integers $1,2,3,\ldots ,126$ are split into $6$ sets, then by the Pigeonhole Principle, one of the sets will contain at least two of the seven chosen integers. Thus, the problem is to arrange the splitting so that in each of the $6$ sets the largest integer is at most twice the smallest.
Clearly, the following splitting does the trick: $\{1,2\}$, $\{3,4,5,6\}$, $\{7,8,\ldots ,13,14\}$, $\{15,16,\ldots ,29,30\}$, $\{31,32,\ldots ,61,62\}$, $\{63,64,\ldots ,125,126\}$.

### References

1. E. J. Barbeau, M. S. Klamkin, W. O. J. Moser, Five Hundred Mathematical Challenges, MAA, 1995, #13
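Both solutions are easy to machine-check. A small Python sketch (mine, not from the original page) verifying that the six sets partition $\{1,\ldots ,126\}$ with each set's maximum at most twice its minimum, and that the sequence $1, 3, 7, \ldots , 127$ defeats the bound 127:

```python
# Solution 2's pigeonholes: the k-th set runs from 2^k - 1 to 2^(k+1) - 2.
sets = [list(range(2**k - 1, 2**(k + 1) - 1)) for k in range(1, 7)]

# The six sets together are exactly {1, ..., 126} ...
assert sorted(sum(sets, [])) == list(range(1, 127))

# ... and within each set, any two elements x < y satisfy 1 < y/x <= 2,
# since max <= 2 * min.
assert all(max(s) <= 2 * min(s) for s in sets)

# The counterexample 1, 3, 7, 15, 31, 63, 127: every consecutive ratio
# strictly exceeds 2, hence so does the ratio of any pair.
seq = [2**k - 1 for k in range(1, 8)]
assert all(b > 2 * a for a, b in zip(seq, seq[1:]))
print(seq)  # [1, 3, 7, 15, 31, 63, 127]
```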
# aspirin chemical name

Aspirin's chemical name is acetylsalicylic acid (molecular formula C9H8O4). It is believed that the analgesic and other desirable properties of aspirin are due not to the aspirin itself but rather to the simpler compound salicylic acid, $\mathrm{C}_{6} \mathrm{H}_{4}(\mathrm{OH}) \mathrm{CO}_{2} \mathrm{H}$, which results from the breakdown of aspirin in the stomach. In solution with alkalis, the hydrolysis proceeds rapidly and the clear solutions formed may consist entirely of acetate and salicylate.

**Mechanism.** Aspirin emerged in 1899, but how it worked was only explained decades later, when the pharmacologist John Vane confirmed the conjecture that aspirin inhibits the production of prostaglandins. The cyclooxygenase (COX) enzyme catalyzes the synthesis of prostaglandins, which perform many functions in living tissues. Acetylsalicylic acid also acts on the hypothalamus to produce antipyresis; heat dissipation is increased as a result of vasodilation and increased peripheral blood flow. Inhibition of COX1 is responsible for much of the drugs' side effects; a newer class of NSAID, the COX2 inhibitors, is designed to spare COX1. Acetaminophen (Tylenol) is not an NSAID, because it does not suppress inflammation. Moreover, aspirin reduced the level of p38 MAPK phosphorylation.

**Platelets and blood clots.** Blood contains, besides red and white blood cells, partial cells called platelets; the disc-like platelets are produced in the bone marrow. Aspirin (but not other salicylates) inhibits platelet aggregation induced by epinephrine or low concentrations of collagen, but not that induced by thrombin or high concentrations of collagen. Complete inactivation of platelet COX-1 is achieved when 160 mg of aspirin is taken daily, which underlies aspirin's use in reducing the risk of blood clots.

**Pharmacokinetics and clinical notes.**

• An occasional patient may note tinnitus at lower plasma concentrations of salicylate.
• In one reported case, a 52-year-old woman ingested approximately 300 tablets (325 mg each) of aspirin in a suicide attempt; in that case the half-life of aspirin was 31 minutes.
• When patients were given the same dose of buffered aspirin (as 6 tablets, each containing 325 mg of aspirin), peak plasma aspirin concentrations of about 14-18 ug/mL and peak plasma salicylate concentrations of about 140-160 ug/mL occurred within 1-2 hours.
• In general, 20-60% of a rectal dose is absorbed if the suppository is retained for 2-4 hours, and 70-100% if it is retained for at least 10 hours.
• The use of salicylates is contraindicated in patients with chronic liver disease.
• The materno-fetal transfer of salicylic acid and its distribution in the fetal organism have been investigated in women in early pregnancy.
• Concurrent treatment with a glucocorticoid or a disease-modifying antirheumatic agent may be needed, depending on the condition being treated and patient response.
• VET: Dogs tolerate aspirin better than cats; however, prolonged use can lead to the development of gastric ulcers.

Reference: Goodman, L.S., and A. Gilman, Goodman and Gilman's The Pharmacological Basis of Therapeutics.
# Square Root

The square root of a number is the value which, when multiplied by itself, gives the original number. For example, the square root of 4 is 2, because when we multiply 2 by itself, the result is 4. If a is a number, then the square root of a is represented as √a. The square root is an important function in principal mathematics that was found and developed a long time ago; its history spans the world, from Ancient Greece to Ancient India. In this article, let us discuss in detail the definition and formula of the square root, the values of square roots from 1 to 50, and some shortcut tricks.

## Definition

The square root of any number gives the same number when multiplied by itself. For example,

√(m·m) = √(m²) = m

The square root function maps the set of non-negative real numbers onto itself. It is usually represented as:

f(x) = √x

For example, the square root of 2 is denoted as √2. To gain a better understanding of square roots, let us take another example, as it shows exactly how to find the square root of a number. Consider a number 'y' acting as the square root of a number 'a', such that y² = a; multiplying y by itself (y·y) gives back the original number a.

## Square Root Symbol

The square root symbol is also called a radical symbol and is usually represented as '√'. To represent the square root of a number 'x' using this symbol, we write √x, where x is the number itself. The number under the radical symbol is called the radicand. For example, the square root of 6 is also represented as radical 6. Both represent the same value.
## Square Root Formula

Using the above explanation, we can conclude that

y·y = y² = a,

where 'a' is the square of a number 'y'; equivalently, y is the square root of a, i.e., y = √a.

## How to Solve Square Root Equation

To solve a square root equation we need to follow the below steps:

1. Isolate the square root on one of the sides (L.H.S. or R.H.S.).
2. Square both sides of the given equation.
3. Solve the remaining equation.

Let us understand the steps with an example.

Example: Solve √(4a+9) – 5 = 0

Solution: Given √(4a+9) – 5 = 0, isolate the square root term first:

√(4a+9) = 5

Now, on squaring both sides, we get:

4a + 9 = 5²
4a + 9 = 25
4a = 16
a = 16/4
a = 4

## Square Roots Table

Here is the list of the square roots of the numbers from 1 to 50.

| n | √n | n | √n | n | √n |
|---|--------|----|--------|----|--------|
| 1 | 1 | 18 | 4.2426 | 35 | 5.9161 |
| 2 | 1.4142 | 19 | 4.3589 | 36 | 6 |
| 3 | 1.7321 | 20 | 4.4721 | 37 | 6.0828 |
| 4 | 2 | 21 | 4.5826 | 38 | 6.1644 |
| 5 | 2.2361 | 22 | 4.6904 | 39 | 6.2450 |
| 6 | 2.4495 | 23 | 4.7958 | 40 | 6.3246 |
| 7 | 2.6458 | 24 | 4.899 | 41 | 6.4031 |
| 8 | 2.8284 | 25 | 5 | 42 | 6.4807 |
| 9 | 3 | 26 | 5.099 | 43 | 6.5574 |
| 10 | 3.1623 | 27 | 5.1962 | 44 | 6.6332 |
| 11 | 3.3166 | 28 | 5.2915 | 45 | 6.7082 |
| 12 | 3.4641 | 29 | 5.3852 | 46 | 6.7823 |
| 13 | 3.6056 | 30 | 5.4772 | 47 | 6.8557 |
| 14 | 3.7417 | 31 | 5.5678 | 48 | 6.9282 |
| 15 | 3.873 | 32 | 5.6569 | 49 | 7 |
| 16 | 4 | 33 | 5.7446 | 50 | 7.0711 |
| 17 | 4.1231 | 34 | 5.831 | | |

## How do you Figure out Square Roots?

Many mathematical operations have an inverse operation: division is the inverse of multiplication, and subtraction is the inverse of addition. Similarly, squaring has an inverse operation called the square root. Finding the square root of a number is the inverse operation of squaring that number.

Example: The square of 7 is 7 × 7 = 7² = 49, so the square root of 49 is √49 = 7.

But for some values, such as the square roots of 10, 13 or 17, we cannot use this method, as these numbers are not perfect squares.
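The isolate, square, solve recipe from the section above generalizes to any equation of the form √(p·a + q) − c = 0. A minimal sketch (Python; the function name and parameterization are mine, not from the article):

```python
import math

def solve_sqrt_equation(p, q, c):
    """Solve sqrt(p*a + q) - c = 0 for a, following the three steps."""
    # Step 1: isolate the radical  ->  sqrt(p*a + q) = c
    # Step 2: square both sides    ->  p*a + q = c**2
    # Step 3: solve what remains   ->  a = (c**2 - q) / p
    a = (c**2 - q) / p
    assert math.isclose(math.sqrt(p * a + q) - c, 0.0)  # check the root
    return a

# The article's example: sqrt(4a + 9) - 5 = 0  ->  a = 4
print(solve_sqrt_equation(4, 9, 5))  # 4.0
```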
The procedure is based on the method called "guess and check": guess your answer, verify it, and repeat the procedure until you have the desired accuracy. It is mostly used to find the square roots of numbers that aren't perfect squares. We can also use the long division method to find the square root of a number.

## Squares and Square Root

Let us see the squares of some numbers and the square roots of those squares here.

| Number | Square | Square root |
|---|---|---|
| 0 | 0² = 0 | √0 = 0 |
| 1 | 1² = 1 | √1 = 1 |
| 2 | 2² = 4 | √4 = 2 |
| 3 | 3² = 9 | √9 = 3 |
| 4 | 4² = 16 | √16 = 4 |
| 5 | 5² = 25 | √25 = 5 |
| 6 | 6² = 36 | √36 = 6 |
| 7 | 7² = 49 | √49 = 7 |
| 8 | 8² = 64 | √64 = 8 |
| 9 | 9² = 81 | √81 = 9 |
| 10 | 10² = 100 | √100 = 10 |

## Applications of Square Roots

The square root is an important part of mathematics, with many practical applications, including in computing. Substituting a known square root value can simplify a problem: for example, an equation containing a radical such as √5 is awkward to manipulate directly, and replacing √5 with its numerical value simplifies the work. Personal calculators include a square root function for computing the square root of any number, along with a whole list of other functions, such as those used in geometry, where one can easily map the area of a square to its side length. Square roots also appear in formulas for roots of quadratic equations, quadratic integers and quadratic fields.

## Square Roots Examples

Let us understand this concept with the help of examples.

Example 1: Find √10 to 2 decimal places.

Solution:

Step 1: Select two perfect squares that your number falls between.
2² = 4; 3² = 9; 4² = 16; 5² = 25. Choose 3 and 4 (since 10 lies between 3² = 9 and 4² = 16).

Step 2: Divide the given number by one of the selected roots. Divide 10 by 3:

10/3 = 3.33 (rounded to 2 places)

Step 3: Find the average of the root and the result from the previous step:

(3 + 3.33)/2 = 3.1667

Verify: 3.1667 × 3.1667 = 10.0279 (not yet accurate enough)

Repeat steps 2 and 3:

10/3.1667 = 3.1579
Average of 3.1667 and 3.1579: (3.1667 + 3.1579)/2 = 3.1623

Verify: 3.1623 × 3.1623 = 10.0001 (more accurate)

So √10 ≈ 3.16.

Example 2: Find the square roots of the whole-number perfect squares from 1 to 100.

Solution: The perfect squares from 1 to 100 are 1, 4, 9, 16, 25, 36, 49, 64, 81, 100.

| Square root | Result |
|---|---|
| √1 | 1 |
| √4 | 2 |
| √9 | 3 |
| √16 | 4 |
| √25 | 5 |
| √36 | 6 |
| √49 | 7 |
| √64 | 8 |
| √81 | 9 |
| √100 | 10 |

Example 3: What are the square roots of 2, 3, 4 and 5?

Solution: Using the square root list, we have

1. the value of root 2, i.e. √2 = 1.4142
2. the value of root 3, i.e. √3 = 1.7321
3. the value of root 4, i.e. √4 = 2
4. the value of root 5, i.e. √5 = 2.2361

Example 4: Is the square root of a negative number a whole number?

Solution: No. By the definition of the square root, negative numbers have no real square roots, because multiplying two equal real numbers (both positive or both negative) always gives a positive result. Square roots of negative numbers are instead expressed as multiples of i (imaginary numbers).

Practice Problems:

1. Simplify √142.
2. Find the value of √12.
3. Are 155, 121 and 139 perfect squares?

To learn more about square roots and other maths topics in a more engaging and effective way, register with BYJU'S, The Learning App.
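The guess-and-check averaging used in Example 1 is the classical Babylonian (Heron's) method. A small sketch in Python (function name mine):

```python
def babylonian_sqrt(n, guess, tolerance=1e-4):
    """Repeatedly average the guess with n/guess until it stabilizes."""
    while True:
        better = (guess + n / guess) / 2  # average of guess and quotient
        if abs(better - guess) < tolerance:
            return better
        guess = better

# Example 1 from the text: approximate the square root of 10, starting from 3.
print(round(babylonian_sqrt(10, 3), 4))  # 3.1623
```

Each iteration roughly doubles the number of correct digits, which is why the text's hand calculation reaches 4-decimal accuracy after only two rounds.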
## Saturday Smorgasbord

Now is the time on Sprockets 360 when we ~~plagiarize~~ share some of the fine things going on around the Internet.

Are you looking for some fun math problems to do? You can try out the bi-weekly Monday Math Madness. MMM #10 at Blinkdagger is a probability problem with a twist: if you know the probability of a group winning the lottery over 25 years, what is the probability of someone in the group winning the lottery at least once in a 5-year period? Solutions are due by Monday night/Tuesday morning at midnight. MMM #11 will appear the following Monday on Wild About Math.

There's also a new regular math problem being posted. Walking Randomly has started posting an Integral of the Week. Integral #1 is $\int \sqrt{\tan(x)} \; dx$. (Incidentally, if you google "Walking Randomly" then the program suggests that you mean "Working Randomly", which feels rather like my summer.)

Finally, why was 6 afraid of 7? Because 7 8 9. (Thank you, thank you, I'll be here all week.) The Barenaked Ladies said it much better on this YouTube video, which I actually found here at Wiskundermeisjes, a site created by Ionica Smeets and Jeanine Daems. (This isn't the first time I've ~~stolen~~ borrowed from them, either: I originally found the idea for Godzilla's Sierpinski Cookies from Evil Mad Scientist Laboratories on their site. Indeed, their site really makes me wish I could speak Dutch.)

### 2 Responses to "Saturday Smorgasbord"

1. Ionica says:

Great that you like our site! We deliberately decided to write our site in Dutch, since we try to write about mathematics for a broad audience. In another language it is much harder to get all the subtle details right and to make cute little word jokes. The advantage is that we are quite well known in The Netherlands. The big disadvantage is, of course, that the rest of the world cannot read our blog…

2.
Ξ says:

I've been impressed with the web translations, though (via Google or Babel Fish, and there are probably others). It's a little more work, and it doesn't pick up on all the nuances the way that someone who understood the language would, but it does allow me to get the gist of your site and some others in languages I don't speak.
## Solve the following equations for x (8th grade math)

1. anonymous (one year ago):

$x^{2}=\frac{4}{36}$

2. EXOEXOEXO: What is the square root of 4? What is the square root of 36?

3. anonymous: 2/6

4. EXOEXOEXO: yes
# Space elevator

*Figure (Space_elevator_structural_diagram.png): "The Space Elevator" would consist of a cable attached to the surface and reaching outwards into space. By positioning it so that the total centrifugal force exceeds the total gravity, either by extending the cable or attaching a counterweight, the elevator would stay in place geosynchronously. Once sent far enough, climbers would be accelerated further by the planet's rotation. This diagram is not to scale.*

A space elevator, also known as a space bridge, is a fixed structure from the Earth's surface into space for carrying payloads. Plausible techniques for building a space elevator include beanstalks, space fountains, and even certain very tall compressive structures similar to those used for aerial masts. A space fountain would use particles fired up from the ground to form a dynamic, quasi-compressive structure. However, space fountains and tall compressive structures, whilst possibly reaching the generally agreed altitude of space (100 km), are unlikely to reach orbit and would require additional rocket propulsion or other means to leave the Earth. A beanstalk (see Jack and the Beanstalk), on the other hand, is an orbital space elevator that uses a cable that 'hangs down' to the surface from synchronous orbit. It is also called a geosynchronous orbital tether, and is one kind of skyhook. A beanstalk attached to the Earth could eventually permit delivery of great quantities of cargo and people to orbit, at costs only a fraction of those associated with current means. Construction would be a vast project: a beanstalk would have to be built of a material that could endure tremendous stress while also being lightweight, cost-effective, and manufacturable. Today's materials technology does not quite meet these requirements. A considerable number of other novel engineering problems would also have to be solved to make a space elevator practical, and not all questions regarding feasibility have yet been addressed.
Nevertheless, optimists say that we could develop the necessary technology by 2008 [1](http://liftport.com/research2.php) and finish building the first space elevator by 2018 [2](http://www.space.com/businesstechnology/technology/space_elevator_020327-1.html) [3](http://www.isr.us/research_es_se.asp). Early elevators would likely be restricted to cargo, due to radiation-shielding issues.

## Physics and structure

*Figure: One concept for the space elevator has it tethered to a mobile seagoing platform.*

There are a variety of beanstalk designs. Almost every design includes a base station, a cable, climbers, and a counterweight.

### Base station

Base station designs typically fall into two categories: mobile and stationary. Mobile stations are typically large oceangoing vessels; stationary platforms are generally located in high-altitude locations. Mobile platforms have the advantage of being able to maneuver to avoid high winds and storms. While stationary platforms lack this flexibility, they typically have access to cheaper and more reliable power sources, and require a shorter cable. While the decrease in cable length may seem minimal (typically no more than a few kilometers), it can significantly reduce the width of the cable at the center (especially for materials with low tensile strength), and significantly reduce the minimum length of cable reaching beyond geostationary orbit.

### Cable

The cable must be made of a material with an extremely high tensile strength to density ratio (the limit to which a material can be stretched without irreversibly deforming, divided by its density). A space elevator can be made relatively economically if a cable with a density similar to graphite and a tensile strength of ~65–120 GPa can be produced in bulk at a reasonable price. By comparison, most steel has a tensile strength of under 1 GPa, and the strongest steels no more than 5 GPa, but steel is heavy.
The much lighter material Kevlar has a tensile strength of 2.6–4.1 GPa, while quartz fiber can reach upwards of 20 GPa; the tensile strength of diamond filaments would theoretically be only minimally higher. Carbon nanotubes have exceeded all other materials and appear to have a theoretical tensile strength and density well within the desired range for space elevator structures, but the technology to manufacture bulk quantities and fabricate them into a cable has not yet been developed. While theoretically carbon nanotubes can have tensile strengths beyond 120 GPa, in practice the highest tensile strength ever observed in a single-walled tube is 63 GPa, and such tubes averaged breaking between 30 and 50 GPa. Even the strongest fiber made of nanotubes is likely to have notably less strength than its components. Further research on purity and different types of nanotubes may improve this number.

*Figure: A seagoing anchor station would incidentally act as a deep-water seaport.*

Most designs call for single-walled carbon nanotubes. While multi-walled nanotubes may attain higher tensile strengths, they have notably higher mass and are consequently poor choices for building the cable. One potential material possibility is to take advantage of the high-pressure interlinking properties of carbon nanotubes of a single variety [4](http://prola.aps.org/pdf/PRB/v62/i19/p12648_1). While this would cause the tubes to lose some tensile strength by trading sp2 bonds (graphite, nanotubes) for sp3 bonds (diamond), it would enable them to be held together in a single fiber by more than the usual weak Van der Waals force (VdW), and allow manufacturing of a fiber of any length.
The technology to spin regular VdW-bonded yarn from carbon nanotubes is just in its infancy: the first success in spinning a long yarn, as opposed to pieces only a few centimeters long, was reported only very recently (March 2004), and the strength-to-weight ratio was worse than Kevlar's, due to inconsistent tube types and short tubes held together only by Van der Waals forces. Note that as of 2004, carbon nanotubes have an approximate price higher than gold, at $100/gram, and 20 million grams would be necessary to form even a seed elevator. This price is decreasing rapidly, and large-scale production would reduce it further, but the price of suitable carbon nanotube cable is anyone's guess at this time. The cable material is an area of fierce worldwide research, and the applications of a successful material go much further than space elevators; this is good for space elevators because it is likely to push the price of the cable material down further. Other suggested application areas include suspension bridges, new composite materials, better rockets, lighter aircraft, and so on.

#### Cable taper

Due to its enormous length, a space elevator cable must be carefully designed to carry its own weight as well as the smaller weight of climbers. In an ideal cable the stress would be constant throughout the whole length, which means tapering the cable at each point in proportion to the total weight of the cable below. Using a model that takes into account the Earth's gravitational and centrifugal forces (and neglecting the smaller solar and lunar effects), it is possible to show that the cross-sectional area of the cable as a function of height looks like this:

$$A(r) = A_{0} \, \exp\left[ \frac{\rho}{s} \left( \tfrac{1}{2}\,\omega^{2} \left(r_{0}^{2} - r^{2}\right) + g_{0} r_{0} \left(1 - \frac{r_{0}}{r}\right) \right) \right]$$

where $A(r)$ is the cross-sectional area as a function of the distance $r$ from the Earth's center.
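The taper formula is straightforward to evaluate numerically and shows how decisive the strength-to-density ratio is. The sketch below (Python) plugs in the constants defined for the formula, plus illustrative material figures (strong steel at roughly 7900 kg/m³ and 5 GPa, a hypothetical nanotube fiber at roughly 1300 kg/m³ and 63 GPa) that are my assumptions for the sake of the example, not sourced design numbers:

```python
import math

# Constants used in the taper formula
OMEGA = 7.292e-5   # Earth's rotational frequency, rad/s
R0 = 6.378e6       # Earth's equatorial radius, m
G0 = 9.780         # gravitational acceleration at the cable's base, m/s^2
R_GEO = 4.2164e7   # geostationary orbital radius, m

def area_ratio(r, density, strength):
    """A(r)/A0 for a constant-stress tapered cable (density kg/m^3, strength Pa)."""
    exponent = density / strength * (
        0.5 * OMEGA**2 * (R0**2 - r**2) + G0 * R0 * (1 - R0 / r)
    )
    return math.exp(exponent)

# The bracketed factor evaluated at GEO is roughly 4.8e7 m^2/s^2.
print(0.5 * OMEGA**2 * (R0**2 - R_GEO**2) + G0 * R0 * (1 - R0 / R_GEO))

# Strong steel (~7900 kg/m^3, ~5 GPa): an absurdly large taper.
print(area_ratio(R_GEO, 7900, 5e9))

# Hypothetical nanotube fiber (~1300 kg/m^3, ~63 GPa): a modest taper.
print(area_ratio(R_GEO, 1300, 63e9))
```

Running the comparison makes the article's point concrete: with steel the cross-sectional area must grow by dozens of orders of magnitude between the base and GEO, while a nanotube-class material needs only a factor of a few.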
The constants in the equation are:

• $A_{0}$, the cross-sectional area of the cable at the Earth's surface;
• $\rho$, the density of the material the cable is made of;
• $s$, the tensile strength of the material;
• $\omega$, the rotational frequency of the Earth about its axis, 7.292 × 10⁻⁵ radians per second;
• $r_{0}$, the distance between the Earth's center and the base of the cable, approximately the Earth's equatorial radius, 6378 km;
• $g_{0}$, the acceleration due to gravity at the cable's base, 9.780 m/s².

This equation gives a shape where the cable thickness initially increases rapidly, in an exponential fashion, but slows at an altitude of a few Earth radii and then levels off, reaching its maximum thickness at geosynchronous orbit. The cable thickness then decreases again out from geosynchronous orbit. Thus the taper of the cable from base to GEO ($r_{\mathrm{GEO}}$ = 42,164 km) is

$$\frac{A(r_{\mathrm{GEO}})}{A_{0}} = \exp\left[ \frac{\rho}{s} \times 4.83 \times 10^{7} \, \mathrm{ \frac{m^2}{s^2} } \right]$$

(the numerical factor is the bracketed expression above evaluated at $r = r_{\mathrm{GEO}}$ with the constants listed). Using the density and tensile strength of steel, and assuming a diameter of 1 cm at ground level, yields a diameter of at least several hundred kilometers at geostationary orbit height, showing that steel, and indeed most materials used in present-day engineering, are unsuitable for building a space elevator. The equation shows us that there are four ways of achieving a more reasonable thickness at geostationary orbit:

• Using a lower-density material. There is not much scope for improvement here, as the range of densities of most solids that come into question is rather narrow, somewhere between 1000 and 5000 kg/m³.
• Using a higher-strength material. This is the area where most of the research is focused.
Carbon nanotubes are tens of times stronger than the strongest types of steel, hugely reducing the cable's cross-sectional area at geostationary orbit.
• Raising the attachment point of the cable, for example atop a tall tower at the base station. The exponential relationship means a small increase in base height results in a large decrease in thickness at geostationary level. Towers up to 100 km high have been proposed; besides reducing the cable mass, such a tower would keep the cable above most atmospheric processes.
• Making the cable as thin as possible at its base. It still has to be thick enough to carry a payload, however, so the minimum thickness at base level also depends on the tensile strength. A carbon-nanotube cable would typically be about a millimeter wide at the base.

### Climbers

Most space elevator designs call for a climber to move autonomously along a stationary cable. A space elevator cannot be an elevator in the typical sense (with moving cables) due to the need for the cable to be significantly wider at the center than at the tips at all times. While designs employing smaller, segmented moving cables along the length of the main cable have been proposed, most cable designs call for the "elevator" to climb up the cable.

Climbers cover a wide range of designs. For elevator designs whose cables are planar ribbons, some have proposed pairs of rollers that hold the cable by friction. Other climber designs involve moving arms with pads of hooks, rollers with retracting hooks, magnetic levitation (unlikely, due to the bulky track required on the cable), and numerous other possibilities.

Power is a significant obstacle for climbers. Barring significant advances in compact nuclear power, energy storage densities are unlikely ever to allow a climber to store the energy for an entire climb without weighing too much.
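To see why on-board storage is so hard, the specific energy needed to ride the cable from the surface to geosynchronous altitude can be estimated from the combined gravitational and centrifugal potential in the rotating frame. A rough sketch (the battery energy density of 0.7 MJ/kg is an assumed figure for good chemical batteries, not from the text):

```python
# Estimate the energy per kilogram to climb a space elevator to GEO.
GM = 3.986e14          # Earth's gravitational parameter, m^3/s^2
OMEGA = 7.292e-5       # Earth's rotation rate, rad/s
R_SURFACE = 6.378e6    # equatorial radius, m
R_GEO = 4.2164e7       # geosynchronous orbit radius, m

def potential(r):
    """Effective potential per kg (gravity + centrifugal) in the rotating frame."""
    return -GM / r - 0.5 * OMEGA**2 * r**2

# Energy per kilogram to climb from the surface to GEO along the cable
delta_e = potential(R_GEO) - potential(R_SURFACE)  # roughly 48 MJ/kg

battery_density = 0.7e6  # J/kg, assumed value for chemical batteries
print(f"Climb energy: {delta_e / 1e6:.1f} MJ/kg")
print(f"Battery mass per kg of payload: {delta_e / battery_density:.0f} kg")
```

The climb takes roughly 48 MJ/kg, so a battery-powered climber would need on the order of seventy times its payload mass in chemical batteries, which is why beamed power dominates the design studies.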
Some proposed solutions involve laser or microwave power beaming. Others would gain part of their energy through regenerative braking of down-climbers passing energy to up-climbers, magnetospheric braking of the cable to damp oscillations, tropospheric heat differentials in the cable, ionospheric discharge through the cable, and other concepts. The primary power methods (laser and microwave power beaming) have significant problems with both efficiency and heat dissipation at both ends, although with optimistic figures for future technologies they are feasible.

Climbers must be paced at optimal intervals so as to minimize cable stress and oscillations and to maximize throughput. The weakest point of the cable is near its planetary connection; new climbers can typically be launched as long as there are not multiple climbers in this region at once. An up-only elevator can handle a higher throughput, but has the disadvantage of not allowing energy recapture through regenerative down-climbers. In addition, since one cannot "leap out of orbit", an up-only elevator would require another means, such as conventional rockets, for payloads and people to shed their orbital energy and return. Finally, up-only climbers that do not return to Earth must be disposable; if used, they should be modular so that their components can be reused for other purposes in geosynchronous orbit. In any case, smaller climbers have the advantage of allowing more flexible pacing of trips up the cable, but may impose technological limitations.

### Counterweight

There have been two dominant methods proposed for dealing with the counterweight need: a heavy object, such as a captured asteroid, positioned past geosynchronous orbit; and extending the cable itself well past geosynchronous orbit.
The latter idea has gained more support in recent years due to the relative simplicity of the task and because a payload that travels to the end of the counterweight cable could be flung off as far as Saturn (and farther, using gravitational assists from planets).

### Launching into outer space

As a payload is lifted up a space elevator, it gains not only altitude but angular momentum as well. This angular momentum is taken from the Earth's own rotation. As the payload climbs, it "drags" on the cable, causing it to tilt very slightly westward (lagging slightly behind the Earth's rotation). The horizontal component of the tension in the cable applies a tangential pull on the payload, accelerating it eastward. Conversely, the cable pulls westward on the Earth's surface, slowing the planet's rotation insignificantly. The opposite process occurs for payloads descending the elevator, tilting the cable eastward and very slightly increasing the Earth's rotation speed. In both cases the centrifugal force acting on the cable's counterweight returns the cable to a vertical orientation, transferring momentum between the Earth and the payload in the process.

We can determine the velocities that might be attained at the end of Pearson's 144,000 km tower (or cable). At the end of the tower, the tangential velocity is 10.93 kilometers per second, which is more than enough to escape the Earth's gravitational field and send probes at least as far out as Saturn. If an object were allowed to slide freely along the upper part of the tower, it would attain a velocity high enough to escape the solar system entirely. This is accomplished by trading the overall angular momentum of the tower (and the Earth) for velocity of the launched object, in much the same way one snaps a towel or throws a lacrosse ball. For still higher velocities, the cargo could be electromagnetically accelerated, or the cable could be extended, although that would require additional strength in the cable.
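The tip speed quoted above is easy to check: a point rotating with the Earth at distance r from its center moves at ω·r, which can be compared with the local escape velocity. A back-of-the-envelope sketch (it assumes the 144,000 km is measured from the Earth's surface, which is an interpretation, not stated in the text):

```python
import math

GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
OMEGA = 7.292e-5   # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.378e6  # equatorial radius, m

r_tip = R_EARTH + 1.44e8  # cable tip distance from the Earth's center, m

v_tip = OMEGA * r_tip                  # tangential speed of the tip, m/s
v_escape = math.sqrt(2 * GM / r_tip)   # local escape velocity at the tip, m/s

print(f"Tip speed:       {v_tip / 1000:.2f} km/s")
print(f"Escape velocity: {v_escape / 1000:.2f} km/s")
```

This gives roughly 11 km/s against a local escape velocity of about 2.3 km/s, consistent with the 10.93 km/s figure in the text and with payloads being flung to the outer planets.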
### Extraterrestrial elevators

A space elevator could also be constructed on some of the other planets, asteroids and moons.

A Martian tether could be much shorter than one on Earth. Mars' surface gravity is 38% of the Earth's, while it rotates about its axis in roughly the same time as the Earth. Because of this, Martian areostationary orbit is much closer to the surface, and hence the elevator would be much shorter. Exotic materials might not be required to construct such an elevator. However, building a Martian elevator would pose a unique challenge because the Martian moon Phobos is in a low orbit and crosses the equator regularly (twice every 11 h 6 min, its orbital period relative to the rotating Martian surface). A collision between the elevator and the 22.2 km diameter moon would have to be avoided through active steering.

A lunar space elevator would need to be very long, more than twice the length of an Earth elevator, but due to the Moon's low gravity it could be made of existing engineering materials. Alternatively, since the Moon lacks an atmosphere, a rotating tether could be used, with its center of mass in orbit around the Moon, a counterweight at the short end and a payload at the long end. The path of the payload would be an epicycloid around the Moon, touching down some integer number of times per orbit. Payloads would thus be lifted off the lunar surface and flung away at the high point of the orbit.

Rapidly spinning asteroids or moons could use cables to eject materials to convenient points, such as Earth orbits; or conversely, to eject materials in order to send the bulk of the mass of the asteroid or moon to Earth orbit or a Lagrangian point. This was suggested by Russell Johnston in the 1980s. Freeman Dyson has suggested using such smaller systems as power generators at points distant from the Sun, where solar power is uneconomical.
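The claim that a Martian elevator could be much shorter follows directly from the stationary-orbit condition ω²r = GM/r², i.e. r = (GM/ω²)^(1/3). A quick sketch, using standard values for the two planets' gravitational parameters and sidereal rotation periods:

```python
import math

def stationary_orbit_radius(gm, rotation_period):
    """Radius from the planet's center at which an object orbits in exactly
    one planetary rotation: r = (GM / omega^2)^(1/3)."""
    omega = 2 * math.pi / rotation_period
    return (gm / omega**2) ** (1.0 / 3.0)

# Earth: GM in m^3/s^2 and sidereal day in seconds
r_geo = stationary_orbit_radius(3.986e14, 86164.1)
# Mars: GM and sidereal day (24.6229 h)
r_areo = stationary_orbit_radius(4.2828e13, 88642.7)

print(f"Earth geostationary radius: {r_geo / 1e3:.0f} km")   # ~42,164 km
print(f"Mars areostationary radius: {r_areo / 1e3:.0f} km")  # ~20,400 km
```

The Martian stationary orbit sits at less than half the radius of the Earth's even though Mars turns at almost the same rate, because its gravitational parameter is roughly a tenth of the Earth's.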
## Construction

The construction of a space elevator would be a vast project, requiring advances in engineering and physical technology. NASA has identified "Five Key Technologies for Future Space Elevator Development":

1. Material for cable (e.g. carbon nanotube and nanotechnology) and tower
2. Tether deployment and control
3. Tall tower construction
4. Electromagnetic propulsion (e.g. magnetic levitation)
5. Space infrastructure and the development of space industry and economy

Two different ways to deploy a space elevator have been proposed.

### Traditional way

One early plan involved lifting the entire mass of the elevator into geosynchronous orbit, then simultaneously lowering one cable toward the Earth's surface while deploying another cable upward, directly away from the Earth. Tidal forces (gravity and centrifugal force) would naturally pull the cables directly toward and directly away from the Earth and keep the elevator balanced around geosynchronous orbit. However, this approach would require lifting hundreds or even thousands of tons on conventional rockets, which would be very expensive.

### Brad Edwards' proposal

Brad Edwards, Director of Research for the Institute for Scientific Research (ISR), based in Fairmont, West Virginia, is a leading authority on the space elevator concept. He proposes that a single hairlike 20 short ton (18 metric ton) "seed" cable be deployed in the traditional way, giving a very lightweight elevator with very little lifting capacity. Then progressively heavier cables would be pulled up from the ground along it, repeatedly strengthening it until the elevator reached the required mass and strength. This is much the same technique used to build suspension bridges.

Although 20 short tons for a seed cable may sound like a lot, it would actually be very lightweight: the proposed average mass is about 0.2 kilogram per kilometer. By comparison, conventional copper telephone wires running to consumer homes weigh about 4 kg/km.
Twenty tons is slightly less than the mass of a Russian geosynchronous communication satellite.

### Other designs

These are far less well developed, and are mentioned here only in passing.

If the cable provides a useful tensile strength of about 62.5 GPa or above, it turns out that a constant-width cable can reach beyond geosynchronous orbit without breaking under its own weight. The far end can then be turned around and passed back down to the Earth, forming a constant-width loop. The two sides of the loop are naturally kept apart by Coriolis forces due to the rotation of the Earth and the cable. By exponentially increasing the thickness of the cable from the ground, a new elevator can be built up very quickly (it helps that no active climbers are needed and that power is applied mechanically). However, because the loop runs at constant speed, joining and leaving the loop may be somewhat challenging, and the strength of the loop is lower than that of a conventional tapered design, reducing the maximum payload that can be carried without snapping the cable [5] (http://www.mit.edu/people/gassend/publications/ExponentialTethers.pdf).

Other structures, such as mechanically-linked multiple looped designs hanging off a central exponential tether, might also be practical and would seem to avoid the need for laser power beaming; such a design has higher capacity than a single loop, but still requires perhaps twice as much tether material.

## Failure modes and safety issues

As with any structure, there are a number of ways in which things could go wrong. A space elevator would present a considerable navigational hazard, both to aircraft and spacecraft. Aircraft could be dealt with by means of simple air-traffic control restrictions, but impacts by space objects (in particular, by meteoroids and micrometeorites) pose a more difficult problem.

### Satellites

If nothing were done, essentially all satellites with perigees below the top of the elevator would eventually collide with it.
Twice per day, each orbital plane intersects the elevator, as the rotation of the Earth swings the cable around the equator. Usually the satellite and the cable will not line up, but except for synchronized orbits, the elevator and a satellite would eventually be in the same place at the same time, resulting in disaster. Most active satellites are capable of some degree of orbital maneuvering and could avoid these predictable collisions, but inactive satellites and other orbiting debris would need to be either preemptively removed from orbit by "garbage collectors" or closely watched and nudged whenever their orbits approach the elevator. The impulses required would be small and needed only very infrequently; a laser broom system may be sufficient for this task. In addition, Brad Edwards' design actually allows the elevator to move out of the way, because the fixing point is at sea and mobile. Further, transverse oscillations of the cable could be controlled so as to ensure that the cable avoids satellites on known paths; the required amplitudes are modest relative to the cable's length.

### Meteoroids and micrometeorites

Meteoroids present a more difficult problem, since they are not predictable and much less time would be available to detect and track them as they approach Earth. It is likely that a space elevator would still suffer impacts of some kind, no matter how carefully it is guarded. However, most space elevator designs call for the use of multiple parallel cables separated from each other by struts, with a sufficient margin of safety that severing just one or two strands still allows the surviving strands to hold the elevator's entire weight while repairs are performed. If the strands are properly arranged, no single impact would be able to sever enough of them to overwhelm the survivors.
Far worse than meteoroids are micrometeorites: tiny high-speed particles found in high concentrations at certain altitudes. Avoiding micrometeorites is essentially impossible, and they will ensure that strands of the elevator are continuously being cut. Most methods designed to deal with this involve a design similar to a hoytether, or a network of strands in a cylindrical or planar arrangement with two or more helical strands. Constructing the cable as a mesh instead of a ribbon helps limit the collateral damage from each micrometeorite impact.

It is not enough, however, that other fibers be able to take over the load of a failed strand; the system must also survive the immediate, dynamical effects of fiber failure, which generates projectiles aimed at the cable itself. For example, if the cable has a working stress of 50 GPa and a Young's modulus of 1000 GPa, its strain will be 0.05 and its stored elastic energy will be 1/2 × 0.05 × 50 GPa = 1.25 × 10⁹ joules per cubic meter. Breaking a fiber will produce a pair of de-tensioning waves moving apart at the speed of sound in the fiber, with the fiber segments behind each wave moving at over 1,000 m/s (more than the muzzle velocity of an M16 rifle). Unless these fast-moving projectiles can be stopped safely, they will break yet more fibers, initiating a failure cascade capable of severing the cable. The challenge of preventing fiber breakage from initiating a catastrophic failure cascade seems to be unaddressed in the current (January 2005) literature on terrestrial space elevators. Problems of this sort would be easier to solve in lower-tension applications (e.g., lunar elevators).

### Corrosion

Corrosion is a major risk to any thinly built tether (which most designs call for). In the upper atmosphere, atomic oxygen steadily eats away at most materials. A tether would consequently need either to be made from a corrosion-resistant material or to have a corrosion-resistant coating, adding to its weight.
Gold and platinum have been shown to be practically immune to atomic oxygen; several far more common materials such as aluminum are damaged very slowly and could be repaired as needed.

### Weather

In the atmosphere, the risk factors of wind and lightning come into play. The basic mitigation is location. As long as the tether's anchor remains within two degrees of the equator, it will remain in the quiet zone between the Earth's Hadley cells, where there is relatively little violent weather. Remaining storms could be avoided by moving a floating anchor platform. The lightning risk can be minimized by using a nonconductive fiber with a water-resistant coating to help prevent a conductive buildup from forming. The wind risk can be minimized by use of a fiber with a small cross-sectional area that can rotate with the wind to reduce resistance.

### Sabotage

Sabotage is a relatively unquantifiable problem. Elevators are probably less susceptible than suspension bridges carrying mass vehicular traffic, of which there are many worldwide. Nonetheless, there are few more spectacular possible targets: no terrorist act in history has approached the potential destruction caused by the carefully-targeted sabotage of a space elevator. Concern over sabotage may have an effect on location, since what would be required would be not only an equatorial site but also one outside the range of unstable territories.

### Vibrational harmonics

A final risk of structural failure comes from the possibility of vibrational harmonics within the cable. Like the shorter and more familiar strings of stringed instruments, the cable of a space elevator has a natural resonance frequency. If the cable is excited at this frequency, for example by the travel of elevators up and down it, the vibrational energy could build up to dangerous levels and exceed the cable's tensile strength.
This can be avoided by the use of intelligent damping systems within the cable, and by scheduling travel up and down the cable with its resonant frequency in mind. It may be possible to damp oscillations against the Earth's magnetosphere, which would additionally generate electricity that could be passed to the climbers. Oscillations can be either linear or rotational.

### In the event of failure

If, despite all these precautions, the elevator is severed anyway, the resulting scenario depends on where exactly the break occurred.

#### Cut near the anchor point

If the elevator is cut at its anchor point on the Earth's surface, the outward force exerted by the counterweight would cause the entire elevator to rise upward into a stable orbit. This is because a space elevator must be kept in tension, with greater centrifugal force pulling outward than gravitational force pulling inward; otherwise any additional payload added at the elevator's bottom end would pull the entire structure down. The ultimate altitude of the severed lower end of the cable would depend on the details of the elevator's mass distribution.

In theory, the loose end might be secured and fastened down again. This would be an extremely tricky operation, however, requiring careful adjustment of the cable's center of gravity to bring the cable back down to the surface at just the right location. It may prove easier to build a new system in such a situation.

#### Cut at about 25,000 km

If the break occurred at a higher altitude, up to about 25,000 km, the lower portion of the elevator would descend to Earth and drape itself along the equator eastward from the anchor point, while the now-unbalanced upper portion would rise to a higher orbit.
Some authors have suggested that such a failure would be catastrophic, with the thousands of kilometers of falling cable creating a swath of meteoric destruction along the Earth's surface, but such damage is unlikely given the relatively low overall density the cable would have. The risk could be further reduced by triggering some sort of destruct mechanism in the falling cable, breaking it into smaller pieces. In most cable designs, the upper part of the falling portion would burn up in the atmosphere; and because the proposed initial cables (the only ones likely to be broken) are very light and flat, the bottom portion would likely settle to Earth with less force than a sheet of paper, due to air resistance on the way down.

If the break occurred on the counterweight side of the elevator, the entire lower portion, now including the "central station" of the elevator, would fall unless prevented by an early self-destruct of the cable shortly below the break. Depending on its size, however, it would largely burn up on reentry anyway.

#### Elevator pods

Any elevator pods on the falling section would also reenter the Earth's atmosphere, but it is likely that the pods would already have been designed to withstand such an event as an emergency measure. It is almost inevitable that some objects (elevator pods, structural members, repair crews, and so on) will accidentally fall off the elevator at some point. Their subsequent fate depends on their initial altitude. Except at geosynchronous altitude, an object on a space elevator is not in a stable orbit, so its trajectory will not remain parallel to the elevator. The object will instead enter an elliptical orbit, whose characteristics depend on where the object was on the elevator when it was released.
If the initial height of the object falling off the elevator is less than 23,000 km, its orbit will have an apogee at the altitude where it was released and a perigee within the Earth's atmosphere; it will intersect the atmosphere within a few hours, without completing a full orbit. Above this critical altitude, the perigee is above the atmosphere and the object will be able to complete a full orbit and return to its starting altitude. By then the elevator would be somewhere else, but a spacecraft could be dispatched to retrieve the object or otherwise remove it. The lower the altitude at which the object falls off, the greater the eccentricity of its orbit.

If the object falls off at the geostationary altitude itself, it will remain nearly motionless relative to the elevator, just as in conventional orbital flight. At higher altitudes the object would again wind up in an elliptical orbit, this time with a perigee at the release altitude and an apogee somewhere higher. The eccentricity of the orbit would increase with the altitude from which the object is released.

Above 47,000 km, an object that falls off the elevator would have a velocity greater than the local escape velocity of the Earth. It would head out into interplanetary space, and if any people were on board it might prove impossible to rescue them.

All of these altitudes are given for an Earth-based space elevator; a space elevator serving a different planet or moon would have different critical altitudes for each of these scenarios.

### Van Allen Belts

The space elevator runs through the Van Allen belts. This is not a problem for most freight, but the amount of time a climber spends in this region would cause radiation sickness in any unshielded human or other living thing.
Some people speculate that passengers and other living things would continue to travel by high-speed rocket, while the space elevator hauls bulk cargo. Research into lightweight shielding and into techniques for clearing out the belts is underway. An elevator could carry passenger cars with heavy lead or other shielding, but for the thin cable of an initial elevator that would reduce overall capacity; this becomes less of a problem later, once the cable has been thickened. Moreover, the shielding itself can in some cases consist of useful payload (for example food, water, supplies, fuel or construction and maintenance materials), in which case no additional shielding cost is incurred on the way up. More conventional and faster reentry techniques such as aerobraking might be employed on the way down to minimize radiation exposure; deorbit burns use relatively little fuel, and so can be cheap.

## Economics

Main article: space elevator economics

With a space elevator, materials could be sent into orbit at a fraction of the current cost. Modern rocketry gives prices on the order of thousands of U.S. dollars per kilogram for transfer to low Earth orbit, and roughly 20 thousand dollars per kilogram for transfer to geosynchronous orbit. For a space elevator, the price could be on the order of a few hundred dollars per kilogram.

Space elevators have high capital costs but low operating expenses, so they make the most economic sense in a situation where they would be used over a long period of time to handle very large amounts of payload. The current launch market may not be large enough to make a compelling case for a space elevator, but a dramatic drop in the price of launching material to orbit would likely result in new types of space activities becoming economically feasible. In this regard space elevators share similarities with other transportation infrastructure projects such as highways or railroads.
Development costs might be roughly equivalent, in modern dollars, to the cost of developing the Space Shuttle system. A question subject to speculation is whether a space elevator would return the investment, or whether it would be more beneficial to spend the money on developing rocketry further.

## Political issues

One potential problem with a space elevator would be the issue of ownership and control. Such an elevator would require significant investment (estimates start at about US$5 billion for a very primitive tether), and it could take at least a decade to recoup such expenses. At present, only governments are able to spend at that magnitude in the space industry.

Assuming that a multinational governmental effort were able to produce a working space elevator, many delicate political issues would remain. Which countries would use the elevator, and how often? Who would be responsible for its defense from terrorists or enemy states? A space elevator would allow for easy deployment of satellites into orbit, and it is becoming ever more obvious that space is a significant military resource. A space elevator could potentially cause numerous rifts between states over its military applications. Furthermore, establishing a space elevator would require knowledge of the positions and paths of all existing satellites in Earth orbit, and the removal of those that cannot adequately avoid the elevator.

The U.S. military may covertly oppose a space elevator. By granting inexpensive access to space, a space elevator would permit less wealthy opponents of the U.S. to gain military access to space, or to challenge U.S. control of space. An important U.S. military doctrine is to maintain space and air superiority during a conflict. In the current political climate, concerns over terrorism and homeland security could be possible grounds for more overt opposition to such a project by the U.S. government.
An initial elevator could be used in relatively short order to lift the materials to build more such elevators, but whether this is done, and how the resulting additional elevators are used, depends on whether the owners of the first elevator are willing to give up any monopoly they may have gained on space access. However, once the technologies are in place, any country with the appropriate resources would most likely be able to create its own elevator.

As space elevators (regardless of design) are inherently fragile but militarily valuable structures, they would likely be targeted immediately in any major conflict with a state that controls one. Consequently, most militaries would elect to continue developing conventional rockets (or other similar launch technologies) to provide effective backup access to space.

The cost of the space elevator is not excessive compared with other megaprojects, and it is conceivable that several countries or an international consortium could pursue it. Indeed, companies and agencies in a number of countries have expressed interest in the concept. Generally, megaprojects need to be either joint public-private partnerships or government ventures, and they also need multiple partners. It is also possible that a private entity (risks notwithstanding) could provide the financing; several large investment firms have stated interest in constructing the space elevator as a private endeavor. From a political standpoint, however, there is a case to be made that the space elevator should be an international effort like the International Space Station, with the inevitable rules for use and access. The political motivation for a collaborative effort comes from the potentially destabilizing nature of the space elevator: it clearly has military applications, but more critically it would give a strong economic advantage to the controlling entity.
Information flowing through satellites, future energy from space, planets full of real estate and associated minerals, and basic military advantage could all potentially be controlled by the entity that controls access to space through the space elevator. An international collaboration could instead result in multiple ribbons at various locations around the globe; since subsequent ribbons would be significantly cheaper, this would allow general access to space and consequently eliminate the instabilities a single system might cause.

The epilogue of Arthur C. Clarke's The Fountains of Paradise shows an Earth with several space elevators leading to a giant circumterran space station. The analogy with a wheel is evident: the space station itself is the wheel's rim, the Earth is the axle, and the six equidistant space elevators are the spokes.

While few ordinary citizens might profit directly from space elevator applications, the general public would probably reap benefits through cheap, environmentally friendly solar power, enhanced satellite navigation and communication services, and even improved health, education and social services made possible by the savings governments would realize in accessing space. Clarke compared the space elevator project to Cyrus Field's efforts to build the first transatlantic telegraph cable, "the Apollo Project of its age" [6] (http://www.spaceelevator.com/docs/acclarke.092079.se.2.html).

## History

The concept of the space elevator first appeared in 1895, when the Russian scientist Konstantin Tsiolkovsky, inspired by the Eiffel Tower in Paris, considered a tower that reached all the way into space. He imagined placing a "celestial castle" at the end of a spindle-shaped cable, with the "castle" orbiting the Earth in a geosynchronous orbit (i.e. remaining over the same spot on the Earth's surface). The tower would be built from the ground up to an altitude of 35,800 kilometers (geostationary orbit).
Comments from Nikola Tesla suggest that he may also have conceived of such a tower. Tsiolkovsky's notes were sent behind the Iron Curtain after his death.

Tsiolkovsky's tower would be able to launch objects into orbit without a rocket. Since an elevator car would attain orbital velocity as it rode up the cable, an object released at the tower's top would also have the orbital velocity necessary to remain in geosynchronous orbit. Building from the ground up, however, proved an impossible task; no material in existence had enough compressive strength to support its own weight under such conditions.

It took until 1957 for another Russian scientist, Yuri N. Artsutanov, to conceive a more feasible scheme for building a space tower. Artsutanov suggested using a geosynchronous satellite as the base from which to construct the tower: a cable would be lowered from geosynchronous orbit to the surface of the Earth while a counterweight was extended from the satellite away from the Earth, keeping the center of gravity of the cable motionless relative to the Earth. Artsutanov published his idea in the Sunday supplement of Komsomolskaya Pravda in 1960. He also proposed tapering the cable thickness so that the tension in the cable was constant; this gives a thin cable at ground level that thickens toward GEO. [7] (http://www.liftport.com/files/Artsutanov_Pravda_SE.pdf)

Making a cable over 35,000 kilometers long is a difficult task. In 1966, four American engineers set out to determine what type of material would be required to build a space elevator, assuming it would be a straight cable with no variation in its cross section. They found that the strength required would be twice that of any then-existing material, including graphite, quartz, and diamond.

In 1975 an American scientist, Jerome Pearson, designed a tapered cross section that would be better suited to building the tower.
The completed cable would be thickest at geosynchronous orbit, where the tension was greatest, and would be narrowest at the tips to reduce the amount of weight that the middle would have to bear. He suggested using a counterweight that would be slowly extended out to 144,000 kilometers (almost half the distance to the Moon) as the lower section of the tower was built. Without a large counterweight, the upper portion of the tower would have to be longer than the lower due to the way gravitational and centrifugal forces change with distance from Earth. His analysis included disturbances such as the gravitation of the Moon, wind, and payloads moving up and down the cable. The weight of the material needed to build the tower would have required thousands of Space Shuttle trips, although part of the material could be transported up the tower once a minimum-strength strand reached the ground, or be manufactured in space from asteroidal or lunar ore. Arthur C. Clarke introduced the concept of a space elevator to a broader audience in his 1978 novel, The Fountains of Paradise, in which engineers construct a space elevator on top of a mountain peak (Adam's Peak in Sri Lanka) on the equatorial island of Taprobane (an Age of Discovery name for Sri Lanka). David Smitherman of NASA/Marshall's Advanced Projects Office has compiled plans for such an elevator that could turn science fiction into reality. His publication, "Space Elevators: An Advanced Earth-Space Infrastructure for the New Millennium" [8] (http://flightprojects.msfc.nasa.gov/fd02_elev.html), is based on findings from a space infrastructure conference held at the Marshall Space Flight Center in 1999. Another American scientist, Bradley Edwards, suggests creating a 100,000 km long paper-thin ribbon, which would stand a greater chance of surviving impacts by meteors.
The work of Edwards has expanded to cover: the deployment scenario, climber design, power delivery system, orbital debris avoidance, anchor system, surviving atomic oxygen, avoiding lightning and hurricanes by locating the anchor in the western equatorial Pacific, construction costs, construction schedule, and environmental hazards. Plans are currently being made to complete engineering developments and material development and to begin construction of the first elevator. Funding to date has been through a grant from the NASA Institute for Advanced Concepts. Future funding is sought through NASA, the United States Department of Defense, and private and public sources. The biggest obstacle to Edwards' proposed design is the limited strength of available tether materials. His calculations call for a fiber composed of epoxy-bonded carbon nanotubes with a minimum tensile strength of 130 GPa; however, tests in 2000 of individual single-walled carbon nanotubes (SWCNTs), which should be notably stronger than an epoxy-bonded rope, found the strongest to measure only 63 GPa [9] (http://bucky-central.mech.nwu.edu/RuoffsPDFs/91.pdf). Space elevator proponents are planning competitions for space elevator technologies [10] (http://msnbc.msn.com/id/5792719/), similar to the Ansari X Prize. Elevator:2010 (http://www.elevator2010.org/) will organize annual competitions for climbers, ribbons and power-beaming systems. The Robolympics Space Elevator Ribbon Climbing event [11] (http://robolympics.net/rules/climbing.shtml) organizes climber-robot building competitions. In March 2005, NASA's Centennial Challenges program announced a partnership with the Spaceward Foundation (the operator of Elevator:2010), raising the total value of prizes to US$400,000 [12] (http://www.nasa.gov/home/hqnews/2005/mar/HQ_m05083_Centennial_prizes.html) [13] (http://www.space.com/news/050323_centennial_challenge.html).
On April 27, 2005, "the Liftport Group of space elevator companies has announced that it will be building a carbon nanotubes manufacturing plant in Millville, New Jersey, to supply various glass, plastic and metal companies with these strong materials. Although Liftport hopes to eventually use carbon nanotubes in the construction of a 100,000 km (62,000 mile) space elevator, this move will allow it to make money in the short term and conduct research and development into new production methods." [14] (http://www.universetoday.com/am/publish/liftport_manufacture_nanotubes.html?2742005)

## Fiction

Note: Some depictions were made before the space elevator concept became known.

## References

- Edwards BC, Westling EA. The Space Elevator: A Revolutionary Earth-to-Space Transportation System. San Francisco, USA: Spageo Inc.; 2002. ISBN 0972604502.
- Space Elevators - An Advanced Earth-Space Infrastructure for the New Millennium (http://flightprojects.msfc.nasa.gov/pdf_files/elevator.pdf) [PDF]. A conference publication based on findings from the Advanced Space Infrastructure Workshop on Geostationary Orbiting Tether "Space Elevator" Concepts, held in 1999 at the NASA Marshall Space Flight Center, Huntsville, Alabama. Compiled by D.V. Smitherman, Jr., published August 2000.
- "The Political Economy of Very Large Space Projects" HTML (http://www.jetpress.org/volume4/space.htm) PDF (http://www.jetpress.org/volume4/space.pdf), John Hickman, Ph.D. Journal of Evolution and Technology Vol. 4 - November 1999.
- Ziemelis K. "Going up". In New Scientist 2001-05-05, no. 2289, p. 24-27. Republished in SpaceRef (http://www.spaceref.com/news/viewnews.html?id=337). Title page: "The great space elevator: the dream machine that will turn us all into astronauts."
- The Space Elevator Comes Closer to Reality (http://www.space.com/businesstechnology/technology/space_elevator_020327-1.html). An overview by Leonard David of space.com, published 27 March 2002.
- Krishnaswamy, Sridhar. Stress Analysis — The Orbital Tower (http://www.cqe.nwu.edu/sk/C62/OrbitalTower_ME362.pdf) (PDF)
SMD inductors: several factors to consider

Mar 31, 2017

When selecting SMD (surface-mount) inductors, the following factors should be considered:

1. Maximum working voltage. The voltage at which the part can operate long-term without overheating or suffering electrical breakdown. If the rated voltage is exceeded, sparking can occur inside the component, producing noise and even permanent damage.

2. Noise electromotive force. The noise EMF of a chip inductor can be neglected in simple circuits, but not in sensitive ones. For wire-wound parts the noise is essentially thermal, determined by the resistance, the temperature, and the measurement bandwidth; thin-film parts additionally exhibit current noise.

3. Tolerance. Commonly quoted tolerances on the nominal value are 5%, 1%, 0.5%, 0.1% and 0.01%.

4. Stability. A measure of how much the part's value changes with external conditions, including temperature, humidity, voltage, time, and the nature of the load.

5. Rated power. The maximum power the part can dissipate continuously without damage at normal temperature and humidity and in still air. A rating of 1-2 times the power expected in the circuit is generally chosen.

6. High-frequency characteristics. When an SMD part is used at high frequency, the effects of stray inductance and parasitic capacitance must be taken into account.
At high frequencies the component behaves as its DC resistance (R0) in series with a stray inductance (LR), with a stray capacitance (CR) in parallel across the pair. For non-wire-wound resistors, LR is typically 0.01-0.05 microhenries and CR 0.1-5 picofarads; for wire-wound resistors LR can reach tens of microhenries and CR several tens of picofarads, and even a non-inductively wound resistor still has an LR of a few microhenries.
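The equivalent circuit just described can be evaluated numerically. The sketch below is illustrative only; the function name and the component values (chosen from the ranges quoted above) are assumptions, not vendor data:

```python
import math

def hf_impedance(r0, lr, cr, f):
    """Impedance of the high-frequency equivalent circuit described above:
    DC resistance R0 in series with stray inductance LR, the series pair
    shunted by stray capacitance CR."""
    w = 2 * math.pi * f
    series = r0 + 1j * w * lr      # R0 + jwL branch
    shunt = 1 / (1j * w * cr)      # stray-capacitance branch
    return series * shunt / (series + shunt)

# Illustrative values taken from the ranges quoted in the text
r0, lr, cr = 100.0, 0.05e-6, 5e-12   # 100 ohm, 0.05 uH, 5 pF
z_lf = hf_impedance(r0, lr, cr, 1e3)  # 1 kHz: behaves like the DC resistance
z_hf = hf_impedance(r0, lr, cr, 1e9)  # 1 GHz: stray capacitance dominates
print(abs(z_lf), abs(z_hf))
```

At low frequency the magnitude of the impedance is essentially R0; near and above the parasitic self-resonance the stray elements take over, which is why these effects matter when the part is used at high frequency.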
https://papers.nips.cc/paper/2008/hash/d947bf06a885db0d477d707121934ff8-Abstract.html
#### Authors Ingo Steinwart, Andreas Christmann #### Abstract In this paper lower and upper bounds for the number of support vectors are derived for support vector machines (SVMs) based on the epsilon-insensitive loss function. It turns out that these bounds are asymptotically tight under mild assumptions on the data generating distribution. Finally, we briefly discuss a trade-off in epsilon between sparsity and accuracy if the SVM is used to estimate the conditional median.
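The sparsity-accuracy trade-off in epsilon that the abstract mentions can be illustrated directly with the epsilon-insensitive loss. This toy sketch (data invented for illustration, not taken from the paper) counts points that incur zero loss; such points lie inside the epsilon-tube and do not become support vectors, so a wider tube yields a sparser SVM at the cost of a cruder fit:

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """Epsilon-insensitive loss: zero inside the eps-tube, linear outside."""
    return [max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred)]

# Toy targets and predictions for a hypothetical fit (not data from the paper)
y_true = [0.0, 0.5, 1.0, 1.5, 2.0]
y_pred = [0.1, 0.4, 1.3, 1.5, 1.7]

# Points with zero loss sit inside the tube and do not become support
# vectors, so a wider tube means a sparser solution.
zero_small = sum(l == 0.0 for l in eps_insensitive_loss(y_true, y_pred, 0.05))
zero_large = sum(l == 0.0 for l in eps_insensitive_loss(y_true, y_pred, 0.5))
print(zero_small, zero_large)  # prints: 1 5
```

With the narrow tube only one point escapes a penalty; with the wide tube all five do, mirroring the paper's observation that the number of support vectors shrinks as epsilon grows.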
# Calculate the percentage of each element in urea

Urea, CO(NH2)2, has the empirical formula CH4N2O. Using rounded atomic masses (C = 12, H = 1, N = 14, O = 16), its molar mass is:

12 + (4 × 1) + (2 × 14) + 16 = 60 g/mol

The mass percent of each element is that element's contribution to the molar mass, divided by the molar mass:

- %C = (12/60) × 100 = 20.00%
- %H = (4/60) × 100 = 6.67%
- %N = (28/60) × 100 = 46.67%
- %O = (16/60) × 100 = 26.67%

Comparing nitrogen fertilizers by the same method: ammonium nitrate, NH4NO3, contains (28/80) × 100 = 35.0% N, and ammonium sulfate, (NH4)2SO4, contains (2 × 14.01)/132.15 × 100 = 21.2% N. Urea therefore has the highest nitrogen content of the three.

Atom economy. Urea is made from ammonia and carbon dioxide: 2NH3 + CO2 → NH2CONH2 + H2O, with relative formula masses NH3 = 17, CO2 = 44, NH2CONH2 = 60 and H2O = 18. The percentage atom economy is the mass of desired product divided by the total mass of reactants: 60 / (2 × 17 + 44) × 100 = 76.9%.

Fertilizer cost. Urea has a guaranteed analysis of 46-0-0 and has cost an average of $567 per ton (2,000 lb). A ton therefore contains 2,000 × 0.46 = 920 lb of N, and the cost per pound of N is $567 / 920 lb ≈ $0.62. Cost per pound of nutrient should be the major criterion when comparing fertilizers.

Percent purity. If 9.5 g of calcium carbonate is recovered from a 10 g sample of chalk, the percent purity is (9.5 / 10) × 100 = 95%.

Percentage yield. A reaction that produces 1.6 g of product when 2.0 g was expected has a percentage yield of (1.6 / 2.0) × 100 = 80%.

Urea reduction ratio (URR). In dialysis, URR = (Upre − Upost) / Upre × 100. For pre- and post-dialysis blood urea concentrations of 65 mg/dL and 20 mg/dL, URR = (65 − 20) / 65 × 100 = 69%; URRs of 65% and above indicate adequate dialysis. A related measure, the fractional excretion of urea, is FEUrea (%) = (SCr × UUrea) / (SUrea × UCr) × 100, where SCr and UCr are serum and urine creatinine and SUrea and UUrea are serum and urine urea.
Identify how Many Molecules of urea Are Present in this sample of urea Are Present this. Cover ) back to mole fraction by dividing by 100 a single molecule the... = 20.00 % the Quick Calculator allows you to select one fertiliser from a list of or! Sum all the mole fraction of cinnamic acid is 148.16 g/mol in solution sum all the masses! = 46.66 %, specify its isotope mass number after each element in bag... One mole of N = molar mass of each element, we a. As 87 % or 87 percent is acceptable personalised fertiliser schedules is 9.5 g of h, and 54.50 of... Fertilizer application is not left out a total mass of each atom each! Sample is heated to … If we have 100 g multiple by 100 we. Other percentage mass composition calculations including % of Hydrogen whose empirical formula is known, Quick. Contains 8.09 g of urea Are Present in this Bottle B have your tested... With the trailing decimals of uncertainty removed its likely cause nitrogen and Oxygen the. For Bio Experiments Contains 8.09 g of urea decimal place in your answer and 54.50 g of calcium in! Yield\ =\ \frac { 1.6 } { 2.0 } \ \times\ 100\ ] percentage yield = %. A periodic table ( inside front cover ) sulfate, or urea 40.92 g of urea Are Present in Bottle... The ratio of one mole of urea is 60.16 g/mol and the molecular of... Do this for a single molecule of urea Are Present in this sample of impure iron pyrite molar. For Bio Experiments Contains 8.09 g of urea and 50.0 g of urea 60.16. Using the theoretical yield equation helps you in finding the theoretical yield from mole!
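The percent-composition arithmetic above is easy to script. Below is a minimal sketch in Python; the `ATOMIC_MASS` table and the `percent_composition` helper are our own naming, not from any particular library, and the atomic masses are the standard rounded periodic-table values:

```python
# Percent composition by mass, illustrated with urea, NH2CONH2 (CH4N2O).

# Standard atomic masses in g/mol (periodic-table values, rounded).
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999}

def percent_composition(formula):
    """formula maps element symbol -> number of atoms, e.g. {"N": 2, ...}.
    Returns the mass percent of each element in the compound."""
    molar_mass = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / molar_mass
            for el, n in formula.items()}

urea = {"C": 1, "H": 4, "N": 2, "O": 1}
for element, pct in percent_composition(urea).items():
    print(f"{element}: {pct:.2f}%")   # nitrogen comes out near 46.65%
```

The same helper works for any formula dictionary, e.g. ammonium nitrate as `{"N": 2, "H": 4, "O": 3}`, which makes it easy to compare nitrogen contents across fertilizers; the returned percentages always sum to 100.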
Realization of minimal C*-dynamical systems in terms of Cuntz-Pimsner algebras

40 pages, no figures. MSC2000 codes: 46L08, 47L80, 22D25. ArXiv pre-print available at: http://arxiv.org/abs/math/0702775. Published in: World Scientific Publishing, International Journal of Mathematics (IJM), 2009, vol. 20, n. 6, p. 751-790. MR#: MR2541934. Zbl#: Zbl pre05589445.

We are grateful to the DFG-Graduiertenkolleg "Hierarchie und Symmetrie in mathematischen Modellen" for supporting a visit of E.V. to the RWTH-Aachen.
Realization of minimal C*-dynamical systems in terms of Cuntz-Pimsner algebras

Fernando Lledó*
Department of Mathematics, University Carlos III Madrid, Avda. de la Universidad 30, E-28911 Leganés (Madrid), Spain
fl[email protected]

Ezio Vasselli
Dipartimento di Matematica, University of Rome "La Sapienza", P.le Aldo Moro 2, I-00185 Roma, Italy
[email protected]

September 16, 2008

Dedicated to Klaus Fredenhagen on his 60th birthday

Abstract

In the present article we provide several constructions of C*-dynamical systems (F, G, β) with a compact group G in terms of Cuntz-Pimsner algebras. These systems have a minimal relative commutant of the fixed-point algebra A := F^G in F, i.e. A' ∩ F = Z, where Z is the center of A, which is assumed to be nontrivial. In addition, we show in our models that the group action β: G → Aut F has full spectrum, i.e. any unitary irreducible representation of G is carried by a β_G-invariant Hilbert space within F.

First, we give several constructions of minimal C*-dynamical systems in terms of a single Cuntz-Pimsner algebra F = O_H associated to a suitable Z-bimodule H. These examples are labeled by the action of a discrete Abelian group C (which we call the chain group) on Z and by the choice of a suitable class of finite-dimensional representations of G. Second, we present a more elaborate construction, where now the C*-algebra F is generated by a family of Cuntz-Pimsner algebras. Here the product of the elements in different algebras is twisted by the chain group action. We specify the various constructions of C*-dynamical systems for the group G = SU(N), N ≥ 2.

Keywords: C*-dynamical systems, minimal relative commutant, Cuntz-Pimsner algebra, Hilbert bimodule, duals of compact groups, tensor categories, non-simple unit
MSC-classification: 46L08, 47L80, 22D25

* Institute for Pure and Applied Mathematics, RWTH-Aachen, Templergraben 55, D-52062 Aachen, Germany (on leave). E-mail: [email protected]

Contents

1 Introduction
  1.1 Main results
  1.2 Outlook
2 Hilbert C*-systems and the chain group
  2.1 Hilbert C*-systems
  2.2 The chain group
3 Cuntz-Pimsner algebras
  3.1 Basic definitions
  3.2 Endomorphisms in Cuntz-Pimsner algebras
  3.3 Amplimorphisms and their associated Cuntz-Pimsner algebras
4 Examples of minimal C*-dynamical systems
5 Construction of Hilbert C*-systems
  5.1 The C*-algebra of a chain group action
    5.1.1 Crossed products
  5.2 Minimal and regular C*-dynamical systems
6 Appendix: Tensor categories of Hilbert bimodules

1 Introduction

Duality of groups plays a central role in abstract harmonic analysis. Its aim is to reconstruct a group G from its dual Ĝ, i.e. from the set (of equivalence classes) of continuous, unitary and irreducible representations, endowed with a proper algebraic and topological structure. The most famous duality result is Pontryagin's duality theorem for locally compact Abelian groups. For compact, not necessarily Abelian, groups there exist also classical results due to Tannaka and Krein (see [18, 19]). Motivated by a long-standing problem in quantum field theory, Doplicher and Roberts came up with a new duality for compact groups (see [13] as well as Müger's appendix in [17]).
In the proof of the existence of a compact gauge group of internal gauge symmetries using only a natural set of axioms for the algebra of observables, they placed the duality of compact groups in the framework of C*-algebras. In this situation, the C*-algebra of local observables A specifies a categorical structure generalizing the representation category of a compact group. The objects of this category are no longer finite-dimensional Hilbert spaces (as in the classical results by Tannaka and Krein), but only a certain semigroup T of unital endomorphisms of the C*-algebra A. In this setting, A has a trivial center, i.e. Z := Z(A) = C1. The arrows of the category are the intertwining operators between these endomorphisms: for any pair of endomorphisms σ, ρ ∈ T one defines

(ρ, σ) := { X ∈ A | X ρ(A) = σ(A) X,  A ∈ A }.   (1)

This category is a natural example of a tensor C*-category, where the norm of the arrows is the C*-norm in A. The tensor product of objects is defined as composition of endomorphisms, ρ, σ → ρ ∘ σ, and for arrows X_i ∈ (ρ_i, σ_i), i = 1, 2, one defines the tensor product by

X_1 × X_2 := X_1 ρ_1(X_2).

The unit object ι is the identity endomorphism, which is simple iff A has a trivial center (since (ι, ι) = Z). If A has a trivial center, then the representation category of G embeds as a full subcategory into the tensor C*-category of endomorphisms of A. The concrete group dual can be described in terms of an essentially unique C*-dynamical system (F, G, β), where F is a unital C*-algebra containing the original algebra A, and the action of the compact group β: G → Aut F has full spectrum. This means that for any element in the dual D ∈ Ĝ there is a β_G-invariant Hilbert space H_D in F such that β_G|H_D ∈ D. (Recall that the scalar product of any pair of elements ψ, ψ' ∈ H_D is defined as ⟨ψ, ψ'⟩1 := ψ* ψ' ∈ C1 and any orthonormal basis in H_D is a set of orthogonal isometries {ψ_i}_i.
The support of H_D is the projection given by the sum of the final projections, i.e. supp H_D = Σ_i ψ_i ψ_i*.) (In this article we will write the set of arrows Hom(ρ, σ) simply as (ρ, σ) for each pair ρ, σ of objects.) Moreover, A is the fixed-point algebra of the C*-dynamical system, i.e. A = F^G, and one has that the relative commutant of A in F is minimal, i.e. A' ∩ F = C1. This clearly implies Z = C1. The C*-algebra F can also be seen as a crossed product of A by the semigroup T of endomorphisms of A (cf. [12]): the endomorphisms ρ ∈ T (which are inner in F) may be implemented in terms of an orthonormal basis {ψ_i}_i ⊂ H in F. The endomorphism is unital iff the corresponding implementing Hilbert space in F has support 1.

In a series of articles by Baumgärtel and the first author (cf. [3, 4, 5]) the duality of compact groups has been generalized to the case where A has a nontrivial center, i.e. Z ⊋ C1, and the relative commutant of A in F remains minimal, i.e.

A' ∩ F = Z.   (2)

(We always have the inclusion Z ⊆ A' ∩ F.) We define a Hilbert C*-system to be a C*-dynamical system (F, G, β) with a group action that has full spectrum and for which the Hilbert spaces in F carrying the irreducible representations of G have support 1 (see Section 2.1 for a precise definition). These particular C*-dynamical systems have a rich structure and many relevant properties hold, for instance a Parseval-like identity (cf. [5, Section 2]). Moreover, there is an abstract characterization by means of a suitable non-full inclusion of C*-categories T_C ⊂ T, where T_C is a symmetric tensor category with simple unit, conjugates, subobjects and direct sums (cf. [5]). A similar construction appeared in Müger [24], using crossed products of braided tensor *-categories with simple units w.r.t. a full symmetric subcategory. The C*-dynamical systems (F, G, β) in this more general context provide natural examples of tensor C*-categories with a non-simple unit, since (ι, ι) = Z. The analysis of these kinds of categories demands the extension of basic notions.
For example, a new definition of irreducible object is needed (cf. [4, 5]). In this case the intertwiner space (ι, ι) ⊋ C1 is a unital Abelian C*-algebra and an object ρ ∈ T is said to be irreducible if the following condition holds:

(ρ, ρ) = 1_ρ × (ι, ι),   (3)

where 1_ρ is the unit of the C*-algebra (ρ, ρ). In other words, (ρ, ρ) is generated by 1_ρ as a (ι, ι)-module. Another new property that appears in the context of non-simple units is the action of a discrete Abelian group on (ι, ι). To any irreducible object ρ one can associate an automorphism α_ρ ∈ Aut Z by means of

1_ρ ⊗ Z = α_ρ(Z) ⊗ 1_ρ,   Z ∈ Z.   (4)

Using this family of automorphisms {α_ρ}_ρ we define an equivalence relation on Ĝ, the dual of the compact group G, and the corresponding equivalence classes become the elements of a discrete Abelian group C(G), which we call the chain group of G. The chain group is isomorphic to the character group of the center of G and the map ρ → α_ρ induces an action of the chain group on Z,

α: C(G) → Aut Z,   (5)

(see Section 2.2). The obstruction to having T symmetric is encoded in the action α: T is symmetric if and only if α is trivial (cf. [5, Section 7]).

Up to now, explicit examples of minimal Hilbert C*-systems with non-simple unit have been constructed only for Abelian groups, in the setting of the C*-algebras of the canonical commutation resp. anticommutation relations in [1, 2]. Some indirect examples based on the abstract characterization in terms of the inclusion of C*-categories T_C ⊂ T can be found in [5, Section 6].

The aim of the present article is to provide a large class of minimal C*-dynamical systems and Hilbert C*-systems for compact non-Abelian groups. These examples are labeled by the action of the chain group on the unital Abelian C*-algebra Z given in (5). A crucial part of our examples are the Cuntz-Pimsner algebras introduced by Pimsner in his seminal article [27]. This is a family of C*-algebras O_M that are naturally generated by a Hilbert bimodule M over a C*-algebra A.
These algebras generalize Cuntz-Krieger algebras as well as crossed products by the group Z. In Pimsner's construction O_M is given as a quotient of a Toeplitz-like algebra acting on a concrete Fock space associated to M. An alternative abstract approach to Cuntz-Pimsner algebras in terms of C*-categories is given in [10, 20, 28]. In our models we use Cuntz-Pimsner algebras O_H associated to certain free Z-bimodules H = H ⊗ Z, where the factor H denotes a generating finite-dimensional Hilbert space with an orthonormal basis specified by isometries {ψ_i}_i. The left Z-action of the bimodule is defined in terms of the chain group action (5).

1.1 Main results

To state our first main result we need to introduce the family 𝒢_0 of all finite-dimensional representations V of the compact group G that satisfy the following two properties: first, V admits an irreducible subrepresentation of dimension or multiplicity ≥ 2 and, second, there is a natural number n ∈ N such that ⊗^n V contains the trivial representation ι, i.e. ι ≺ ⊗^n V. Then we show:

Main Theorem 1 (Theorem 4.9) Let G be a compact group, Z a unital Abelian C*-algebra, and consider a fixed chain group action α: C(G) → Aut(Z). Then for any V ∈ 𝒢_0 there exists a Z-bimodule H_V = H_V ⊗ Z with left Z-action given in terms of α and a C*-dynamical system (O_{H_V}, G, β_V), satisfying the following properties:

(i) (O_{H_V}, G, β_V) is minimal, i.e. A_V' ∩ O_{H_V} = Z, where A_V := O_{H_V}^G is the corresponding fixed-point algebra.

(ii) The Abelian C*-algebra Z coincides with the center of the fixed-point algebra A_V, i.e. A_V' ∩ A_V = Z.

Moreover, if G is a compact Lie group, then the Hilbert spectrum of (O_{H_V}, G, β_V) is full, i.e. for each irreducible class D ∈ Ĝ there is an invariant Hilbert space H_D ⊂ O_{H_V} (in this case not necessarily of support 1) such that β_V|H_D specifies an irreducible representation of class D.

An important step in the proof is to show that the corresponding bimodules H_V are nonsingular. This notion was introduced in [10] and is important for analyzing the relative commutants in the corresponding Cuntz-Pimsner algebras (see Section 3 for further details). We give a characterization of the class of nonsingular bimodules that will appear in this article (cf. Proposition 3.12). The preceding theorem may be applied to the group SU(N) in order to define a corresponding minimal C*-dynamical system with full spectrum (cf. Example 4.10).

To present examples of minimal C*-dynamical systems with full spectrum, where the Hilbert spaces in F that carry the irreducible representations of the group have support 1, we need a more elaborate construction: to begin with, we introduce a C*-algebra generated by a family of Cuntz-Pimsner algebras that are labeled by any family 𝒢 of unitary, finite-dimensional representations of G (see Subsection 5.1 for a precise presentation of this algebra). This construction is interesting in itself and can be performed for coefficient algebras R which are not necessarily Abelian. Concretely we show:

Main Theorem 2 (Theorem 5.6) Let G be a compact group, R a unital C*-algebra and α: C(G) → Aut R a fixed action of the chain group C(G). Then, for every set 𝒢 of finite-dimensional representations of G, there exists a universal C*-algebra R ⋊_α 𝒢 generated by R and the Cuntz-Pimsner algebras {O_{H_V}}_{V ∈ 𝒢}, where the product of the elements in the different algebras is twisted by the chain group action α.
The C*-algebra R ⋊_α 𝒢 (which we will also denote simply by F) generalizes some well-known constructions, obtained for particular choices of the family of representations 𝒢, such as Cuntz-Pimsner algebras, crossed products by single endomorphisms (à la Stacey) or crossed products by Abelian groups. Hilbert space representations of R ⋊_α 𝒢 are labeled by covariant representations of the C*-dynamical system (R, C(G), α). Now, we restrict the result of the Main Theorem 2 to the case 𝒢 = 𝒢_0 with Abelian coefficient algebra R = Z. The C*-algebra F = Z ⋊_α 𝒢_0 specifies prototypes of Hilbert C*-systems for non-Abelian groups in the context of non-simple units satisfying all the required properties:

Main Theorem 3 (Theorem 5.14) Let G be a compact group, Z a unital Abelian C*-algebra and α: C(G) → Aut Z a fixed chain group action. Given the set of finite-dimensional representations 𝒢_0 introduced above and the C*-algebra F := Z ⋊_α 𝒢_0 of the preceding theorem, there exists a minimal C*-dynamical system (F, G, β), i.e. A' ∩ F = Z, where A is the corresponding fixed-point algebra. Moreover, Z coincides with the center of A, i.e. Z = A' ∩ A, and for any V ∈ 𝒢_0 the Hilbert space H_V ⊂ O_{H_V} ⊂ F has support 1.

We may apply the preceding theorem to the group G := SU(2). Here we choose as the family of finite-dimensional representations 𝒢_0 all irreducible representations of G with dimension ≥ 2. This gives an explicit example of a Hilbert C*-system for SU(2) (cf. Example 5.15).

The structure of the article is as follows: In Section 2 we present the main definitions and results concerning Hilbert C*-systems and the chain group. In Section 3 we recall the main features of Cuntz-Pimsner algebras that will be needed later. In the following section we present a family of minimal C*-dynamical systems for a compact group G and a single Cuntz-Pimsner algebra. This family of examples is labeled by the chain group action (5) and the elements of a suitable class 𝒢_0 of finite-dimensional representations of G.
In Section 5 we first construct a C*-algebra F generated by the Cuntz-Pimsner algebras {O_{H_V}}_{V ∈ 𝒢_0} as described above. Then we show that with F we can construct a Hilbert C*-system in a natural way. We conclude this article with a short appendix restating some of the previous concrete results in terms of tensor categories of Hilbert bimodules.

1.2 Outlook

Doplicher and Roberts show in the setting of the new duality of compact groups that essentially every concrete dual of a compact group G may be realized in a natural way within a C*-algebra F, which is the C*-tensor product of Cuntz algebras (cf. [11]). Under additional assumptions it is shown that the corresponding fixed-point algebra is simple and therefore must have a trivial center. The results in this paper generalize this situation. In fact, one may also realize concrete group duals within the C*-algebra F := Z ⋊_α 𝒢_0 constructed in the Main Theorem 3, where now the corresponding fixed-point algebra has a nontrivial center Z. If Z = C1, then Z ⋊_α 𝒢_0 reduces to the tensor product of Cuntz algebras labeled by the finite-dimensional representations of the compact group contained in 𝒢_0.

As mentioned above, our models provide natural examples of tensor C*-categories with a non-simple unit. These structures have been studied recently in several problems in mathematics and mathematical physics: in the general context of 2-categories (see [33] and references cited therein), in the study of group duality and vector bundles [30, 31], and in the context of superselection theory in the presence of quantum constraints [2]. Finally, algebras of quantum observables with nontrivial center Z also appear in lower-dimensional quantum field theories with braiding symmetry (see e.g. [15], [23, §2]). In particular, in the latter reference the vacuum representation of the global observable algebra is not faithful and maps central elements to scalars.
In the mathematical setting of this article, the analogue of the observable algebra is analyzed without making use of Hilbert space representations that trivialize the center. Moreover, the representation theory of a compact group is described by endomorphisms (i.e. the analogue of superselection sectors) that preserve the center. It is clear that our models do not fit completely in the frame given by lower-dimensional quantum field theories, since, for example, we do not use any braiding symmetry. Nevertheless, we hope that some pieces of the analysis considered here can also be applied, e.g. the generalization of the notion of irreducible objects and the analysis of their restriction to the center Z that in our context led to the definition of the chain group, or the importance of Cuntz-Pimsner algebras associated to Z-bimodules.

2 Hilbert C*-systems and the chain group

For convenience of the reader we recall the main definitions and results concerning Hilbert C*-systems that will be used later in the construction of the examples. We will also introduce the notion of the chain group associated to a compact group, which will be crucial in the specification of the examples. For a more detailed analysis of Hilbert C*-systems we refer to [5, Sections 2 and 3] and [6, Chapter 10].

2.1 Hilbert C*-systems

Roughly speaking, a Hilbert C*-system is a special type of C*-dynamical system {F, G, β} that, in addition, contains the information of the representation category of the compact group G. Here F denotes a unital C*-algebra and β: G ∋ g → β_g ∈ Aut F is a pointwise norm-continuous morphism. Moreover, the representations of G are carried by the algebraic Hilbert spaces, i.e. Hilbert spaces H ⊂ F, where the scalar product ⟨·,·⟩ of H is given by ⟨A, B⟩1 := A* B for A, B ∈ H. (Algebraic Hilbert spaces are also called in the literature Hilbert spaces in C*-algebras.) Henceforth, we consider only finite-dimensional algebraic Hilbert spaces.
The support supp H of H is defined by supp H := Σ_{j=1}^d ψ_j ψ_j*, where {ψ_j | j = 1, ..., d} is any orthonormal basis of H.

To give a precise definition of a Hilbert C*-system we need to introduce the spectral projections: for D ∈ Ĝ (the dual of G) its spectral projection Π_D ∈ L(F) is defined by

Π_D(F) := ∫_G χ_D(g) β_g(F) dg   for all F ∈ F,   (6)

where

χ_D(g) := dim D · Tr U_D(g),   U_D ∈ D,

is the so-called modified character of the class D and dg is the normalized Haar measure of the compact group G. For the trivial representation ι ∈ Ĝ, we put

A := Π_ι F = { F ∈ F | β_g(F) = F,  g ∈ G },

i.e. A = F^G is the fixed-point algebra in F w.r.t. G. We denote by Z = Z(A) the center of A, which we assume to be nontrivial.

Definition 2.1 The C*-dynamical system {F, G, β} with compact group G is called a Hilbert C*-system if it has full Hilbert spectrum, i.e. for each D ∈ Ĝ there is a β-stable Hilbert space H_D ⊂ Π_D F with support 1, and the unitary representation β_G|H_D is in the equivalence class D ∈ Ĝ. A Hilbert C*-system is called minimal if

A' ∩ F = Z,

where Z is the center of the fixed-point algebra A := F^G.

Since we can identify G with β_G ⊆ Aut F, we will often denote the Hilbert C*-system simply by {F, G}.

Remark 2.2 Some families of examples of minimal Hilbert C*-systems with fixed-point algebra A_C ⊗ Z, where A_C has trivial center, were constructed indirectly in [5, Section 6]. Some explicit examples in the context of the CAR/CCR-algebra with an Abelian group are given in [1] and [2, Section V].

To each G-invariant algebraic Hilbert space H ⊂ F there is assigned a corresponding inner endomorphism ρ_H ∈ End F given by

ρ_H(F) := Σ_{j=1}^{d(H)} ψ_j F ψ_j*,

where {ψ_j | j = 1, ..., d(H)} is any orthonormal basis of H. It is easy to see that A is stable under the inner endomorphism ρ_H. We call canonical endomorphism the restriction of ρ_H to A, i.e. ρ_H|A ∈ End A. By abuse of notation we will also denote it simply by ρ_H. Let Z denote the center of A; we say that an endomorphism ρ is irreducible if (ρ, ρ) = ρ(Z).
In the nontrivial center situation canonical endomorphisms do not characterize the algebraic Hilbert spaces anymore. In fact, the natural generalization in this context is the following notion of free Hilbert Z-bimodule: let H be a G-invariant algebraic Hilbert space in F of finite dimension d. Then we define first the free right Z-module 𝐇 by extension

𝐇 := H Z = { Σ_{i=1}^d ψ_i Z_i | Z_i ∈ Z },   (7)

where Ψ := {ψ_i}_{i=1}^d is an orthonormal basis in H. In other words, the set Ψ becomes a module basis of 𝐇 and dim_Z 𝐇 = d. For H_1, H_2 ∈ 𝐇 put

⟨H_1, H_2⟩_𝐇 := H_1* H_2 ∈ Z.

Then {𝐇, ⟨·,·⟩_𝐇} is a Hilbert right Z-module, or a Hilbert Z-module for short. Now the canonical endomorphism can also be written as

ρ_H(A) := Σ_{j=1}^d φ_j A φ_j*,   A ∈ A,

where {φ_j}_{j=1}^d is any orthonormal basis of the Z-module 𝐇. Hence we actually have ρ_H = ρ_𝐇, and it is easy to show that

H ∈ 𝐇 iff H A = ρ_H(A) H.

In other words, ρ_H characterizes uniquely the Hilbert Z-module 𝐇. Moreover, since for any canonical endomorphism ρ = ρ_H we have that Z ⊂ (ρ, ρ), it is easy to see that there is a canonical left action of Z on 𝐇. Concretely, there is a natural *-homomorphism Z → L(𝐇), where L(𝐇) is the set of Z-module morphisms (see [3, Sections 3 and 4] for more details). Hence 𝐇 becomes a Z-bimodule. We conclude stating the isomorphism between the category of canonical endomorphisms and the corresponding category of free Z-bimodules (cf. [5, Proposition 4.4] and [3, Section 4]).

Proposition 2.3 Let {F, G} be a given minimal Hilbert C*-system, where the fixed-point algebra A has center Z. Then the category T of all canonical endomorphisms of {F, G} is isomorphic to the subcategory M_G of the category of free Hilbert Z-bimodules with objects 𝐇 = H Z, where H is a G-invariant algebraic Hilbert space with supp H = 1, and the arrows given by the corresponding G-invariant module morphisms L(𝐇_1, 𝐇_2; G).

The bijection of objects is given by ρ_H ↔ 𝐇 = H Z, which satisfies the condition

ρ_{H_1} ∘ ρ_{H_2} ←→ 𝐇_1 · 𝐇_2,

where V, W ∈ A are isometries with V V* + W W* = 1, and the latter product is the inner tensor product of the Hilbert Z-modules w.r.t. the *-homomorphism Z → L(𝐇_2). The bijection on arrows is defined by

J: L(𝐇_1, 𝐇_2; G) → (ρ_1, ρ_2) with J(T) := Σ_{j,k} ψ_j Z_{j,k} φ_k*.

Here {ψ_j}_j, {φ_k}_k are orthonormal bases of 𝐇_2, 𝐇_1, respectively, and (Z_{j,k})_{j,k} is the matrix of the right Z-linear operator T from 𝐇_1 to 𝐇_2 which intertwines the G-actions.

The preceding proposition shows that the canonical endomorphisms uniquely determine the corresponding Z-bimodules, but not the choice of the generating algebraic Hilbert spaces. The assumption of the minimality condition in Definition 2.1 is crucial here. From the point of view of the Z-bimodules it is natural to consider next the following property of Hilbert C*-systems: the existence of a special choice of algebraic Hilbert spaces within the modules that define the canonical endomorphisms and which is compatible with products.

Definition 2.4 A Hilbert C*-system {F, G} is called regular if there is an assignment T ∋ σ → H_σ, where H_σ is a G-invariant algebraic Hilbert space with supp H_σ = 1 and σ = ρ_{H_σ} (i.e. σ is the canonical endomorphism of the algebraic Hilbert space H_σ), which is compatible with products:

σ ∘ τ → H_σ · H_τ.

Remark 2.5 In a minimal Hilbert C*-system regularity means that there is a "generating" Hilbert space H_τ ⊂ 𝐇_τ for each τ (with 𝐇_τ = H_τ Z) such that the compatibility relation for products stated in Definition 2.4 holds. If a Hilbert C*-system is minimal and Z = C1, then it is necessarily regular.
2.2 The chain group

In the present section we recall the main motivations and definitions concerning the chain group associated with a compact group $G$. For proofs and more details see [5, Section 5] (see also [25]). One of the fundamental new aspects of superselection theory with a nontrivial center $Z$ is the fact that irreducible canonical endomorphisms act as (nontrivial) automorphisms on $Z$. In fact, let $D \in \hat{G}$ (the dual of $G$) and denote by $\rho_D := \rho_{\mathbf{H}_D}$ the corresponding irreducible canonical endomorphism. Then, to any class $D$ we can associate the following automorphism on $Z$:

$$\hat{G} \ni D \mapsto \alpha_D := \rho_D \upharpoonright Z \in \operatorname{Aut} Z. \qquad (8)$$

This observation allows one to introduce a natural equivalence relation in the dual $\hat{G}$ which, roughly speaking, relates elements $D, D' \in \hat{G}$ if there is a "chain of tensor products" of elements in $\hat{G}$ containing $D$ and $D'$ (see Theorem 2.10 and Remark 2.11 below).
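As a standard illustration of this equivalence relation (an aside added here, not worked out in this passage): for $G = SU(2)$ the Clebsch-Gordan rules force every chain of tensor products to preserve the spin modulo 1, so the dual splits into exactly two chain-equivalence classes.

```latex
% Standard example (not taken from this passage): irreducibles of SU(2)
% are the spin-j representations D_j, and the Clebsch-Gordan decomposition
D_{j_1} \otimes D_{j_2} \;\cong\; \bigoplus_{j=|j_1-j_2|}^{j_1+j_2} D_j
% runs over j in integer steps, so every summand satisfies
j \;\equiv\; j_1 + j_2 \pmod{1}.
% Chains of tensor products therefore never mix integer with half-integer
% spin, and the chain group has exactly two elements,
C(SU(2)) \;\cong\; \mathbb{Z}_2 \;\cong\; \widehat{Z(SU(2))},
% consistent with the identification of the chain group with the
% character group of the center of G.
```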
http://mathhelpforum.com/calculus/100654-can-t-evaluate-integral.html
# Thread: Can't evaluate this integral

1. ## Can't evaluate this integral

I can't follow a proof in my physics class. It involves this:

$\int^{2\pi}_0 e^{i(n+1)x}dx$

I am supposed to get $2\pi$ for $n=-1$ and $0$ otherwise. I got:

$\frac{e^{i(n+1)x}}{i(n+1)}$

Using Euler's formula I got:

$\frac{\sin[(n+1)x]}{n+1}-i \frac{\cos[(n+1)x]}{n+1}$

Plugging in the limits I only got 0 regardless of what $n$ is. What am I missing?

2. The antiderivative $\frac{e^{i(n+1)x}}{i(n+1)}$ is only valid for $n \neq -1$ (otherwise it divides by zero). For $n = -1$ the integrand is constant, so the integral is $\int_{0}^{2 \pi} 1 \ dx = 2 \pi$.

3. OMG........thank.... I spent so long working blind....... I was trying to plug in n after I do the work.
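A quick numerical cross-check of the claimed values (my addition, not part of the thread): the integral is $2\pi$ at $n=-1$ and vanishes for every other integer $n$.

```python
# Approximate the integral of e^{i(n+1)x} over [0, 2*pi] with the midpoint
# rule, which is essentially exact here because the integrand is periodic.
import cmath
import math

def integral(n, steps=20_000):
    """Midpoint-rule approximation of the integral of e^{i(n+1)x} on [0, 2*pi]."""
    h = 2 * math.pi / steps
    return sum(cmath.exp(1j * (n + 1) * (k + 0.5) * h) for k in range(steps)) * h

for n in (-2, -1, 0, 1, 3):
    val = integral(n)
    print(n, round(val.real, 6), round(val.imag, 6))
```

Only the $n=-1$ row is nonzero; every other row is zero to rounding error, matching the claim in the thread.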
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/13/lesson/13.2.2/problem/13-113
### Home > A2C > Chapter 13 > Lesson 13.2.2 > Problem 13-113

13-113. Solve for $x$.

a. Use trigonometric relationships: $x ≈ 69.34'$

b. Use the Law of Cosines: $x ≈ 5.35$
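The triangle diagrams for this problem are not reproduced here, so the side lengths and angle below are made-up placeholders; the sketch only shows the Law of Cosines step itself, $x^2 = a^2 + b^2 - 2ab\cos C$.

```python
# Law of Cosines helper. The inputs (4, 7, 48 degrees) are hypothetical,
# not the actual measurements from problem 13-113.
import math

def law_of_cosines(a, b, C_degrees):
    """Return the side opposite angle C, given sides a, b and included angle C."""
    C = math.radians(C_degrees)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))

print(round(law_of_cosines(4, 7, 48), 2))
```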
http://mathhelpforum.com/calculus/192587-implicit-differentiation-find-special-points-circle.html
# Math Help - Implicit differentiation to find special points on a circle

1. ## Implicit differentiation to find special points on a circle

This was on last year's paper for my upcoming test. My workings got so convoluted that I suspect I have strayed off track. I was wondering if I have overlooked something simple and made it more complicated than it needs to be?

Use implicit differentiation to find the points on the circle $x^2-2y+y^2-2x-2=0$ closest to and farthest away from the origin.

Taking the derivative I get:

$\frac{dy}{dx} = \frac{2-2x}{2y-2} = \frac{1-x}{y-1}$

Setting the derivative = 0, I get x = 1 and therefore the top and bottom of the circle: (1,3) and (1,-1)

After graphing the function, I can see that the two points will be where the line y=x intersects the circle. To find where this happens, I tried to find where the gradient of the circle is perpendicular to that of the line y=x.

$\frac{1-x}{y-1} = -1$

$1-x=1-y$

$y=x$ :S

So then I tried setting the original equation = y-x

$x^2-2y+y^2-2x-2 = y-x$

$y^2-3y=x-x^2+2$

$y^2-3y+\frac{9}{4}=x-x^2+\frac{17}{4}$

$(y-\frac{3}{2})^2=x-x^2+\frac{17}{4}$

$y = \sqrt{x-x^2+\frac{17}{4}}+\frac{3}{2}$

I'm lost on this one :S

2. ## Re: Implicit differentiation to find special points on a circle

Never mind, after reading my own post I think I have it. I noticed that since the points occur on the line y=x, then the x and y values will equal each other. So, I just re-wrote the original formula for the circle substituting x = y into it to get:

$x^2-2x+x^2-2x-2=0$

$x^2-2x-1=0$

$x= \frac{2\pm \sqrt{8}}{2} = 1\pm \sqrt{2}$

Which looks right. But I didn't really use implicit differentiation to find the two points did I? It was more geometry :S is there another way? I hope I don't get that on my test
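A numerical cross-check of that answer (my addition, not from the thread): completing the square turns $x^2-2y+y^2-2x-2=0$ into $(x-1)^2+(y-1)^2=4$, so we can parametrize the circle and scan for the points nearest to and farthest from the origin.

```python
# Scan the circle of radius 2 centred at (1, 1) and locate the extreme
# distances to the origin; they should land at (1 -/+ sqrt(2), 1 -/+ sqrt(2)).
import math

def point_on_circle(t):
    """The circle rewritten in centre-radius form: centre (1, 1), radius 2."""
    return 1 + 2 * math.cos(t), 1 + 2 * math.sin(t)

n = 40_000
dists = []
for k in range(n):
    x, y = point_on_circle(2 * math.pi * k / n)
    dists.append((math.hypot(x, y), x, y))

d_min, x_min, y_min = min(dists)
d_max, x_max, y_max = max(dists)
print("closest:", round(x_min, 3), round(y_min, 3))   # both coordinates ~ 1 - sqrt(2)
print("farthest:", round(x_max, 3), round(y_max, 3))  # both coordinates ~ 1 + sqrt(2)
```

This confirms the forum answer $x = 1 \pm \sqrt{2}$ (with $y = x$), and also shows which root is the closest point and which the farthest.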
http://phd.chjh.nl/examining-statistical-properties-of-the-fisher-test.html
A Examining statistical properties of the Fisher test

The Fisher test to detect false negatives is only useful if it is powerful enough to detect evidence of at least one false negative result in papers with few nonsignificant results. Therefore we examined the specificity and sensitivity of the Fisher test to test for false negatives, with a simulation study of the one sample $$t$$-test. Throughout this chapter, we apply the Fisher test with $$\alpha_{Fisher}=0.10$$, because tests that inspect whether results are "too good to be true" typically also use alpha levels of 10% (Sterne, Gavaghan, and Egger 2000; Ioannidis and Trikalinos 2007; Francis 2012).

The simulation procedure was carried out for conditions in a three-factor design, where power of the Fisher test was simulated as a function of sample size $$N$$, effect size $$\eta$$, and $$k$$ test results. The three-factor design was a 3 (sample size $$N$$: 33, 62, 119) by 100 (effect size $$\eta$$: .00, .01, .02, …, .99) by 18 ($$k$$ test results: 1, 2, 3, …, 10, 15, 20, …, 50) design, resulting in 5,400 conditions. The levels for sample size were determined based on the 25th, 50th, and 75th percentiles for the degrees of freedom ($$df2$$) in the observed dataset for Application 1. Each condition contained 10,000 simulations. The power of the Fisher test for one condition was calculated as the proportion of significant Fisher test results given $$\alpha_{Fisher}=0.10$$. If the power for a specific effect size $$\eta$$ was $$\geq99.5\%$$, power for larger effect sizes was set to 1.

We simulated false negative $$p$$-values according to the following six steps (see Figure A.1). First, we determined the critical value under the null distribution. Second, we determined the distribution under the alternative hypothesis by computing the non-centrality parameter as $$\delta=(\eta^2/(1-\eta^2))N$$ (Steiger and Fouladi 1997; Smithson 2001).
Third, we calculated the probability that a result under the alternative hypothesis was, in fact, nonsignificant (i.e., $$\beta$$). Fourth, we randomly sampled, uniformly, a value between $$0$$ and $$\beta$$. Fifth, with this value we determined the accompanying $$t$$-value. Finally, we computed the $$p$$-value for this $$t$$-value under the null distribution. We repeated the procedure to simulate a false negative $$p$$-value $$k$$ times and used the resulting $$p$$-values to compute the Fisher test. Before computing the Fisher test statistic, the nonsignificant $$p$$-values were transformed (see Equation (4.1)). Subsequently, we computed the Fisher test statistic and the accompanying $$p$$-value according to Equation (4.2).

References

Francis, Gregory. 2012. "Too Good to Be True: Publication Bias in Two Prominent Studies from Experimental Psychology." Psychonomic Bulletin & Review 19 (2). Springer Nature: 151–56. doi:10.3758/s13423-012-0227-9.

Ioannidis, John PA, and Thomas A Trikalinos. 2007. "An Exploratory Test for an Excess of Significant Findings." Clinical Trials: Journal of the Society for Clinical Trials 4 (3). SAGE Publications: 245–53. doi:10.1177/1740774507079441.

Smithson, Michael. 2001. "Correct Confidence Intervals for Various Regression Effect Sizes and Parameters: The Importance of Noncentral Distributions in Computing Intervals." Educational and Psychological Measurement 61 (4). SAGE Publications: 605–32. doi:10.1177/00131640121971392.

Steiger, James H, and Rachel T Fouladi. 1997. "Noncentrality interval estimation and the evaluation of statistical models." In What If There Were No Significance Tests, edited by Lisa L. Harlow, Stanley A. Mulaik, and James H Steiger. New York, NY: Psychology Press.

Sterne, Jonathan A.C, David Gavaghan, and Matthias Egger. 2000. "Publication and Related Bias in Meta-Analysis." Journal of Clinical Epidemiology 53 (11). Elsevier BV: 1119–29. doi:10.1016/s0895-4356(00)00242-0.
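The six-step procedure can be sketched in a dependency-free way. This is my own illustration rather than the study's code, and it makes several labeled simplifications: a one-sided test with $$\alpha=.05$$, a normal approximation in place of the (noncentral) $$t$$ distribution, the noncentrality entering the test statistic as the square root of the quantity above, and Equation (4.1) read as the rescaling $$p^* = (p-.05)/(1-.05)$$ of nonsignificant $$p$$-values onto the unit interval.

```python
# Sketch of the six-step false-negative simulation plus the Fisher statistic.
# Simplifications (mine): one-sided z-test instead of the noncentral t, and
# an assumed form for the Eq. (4.1) transform.
import math
import random
from statistics import NormalDist

ALPHA = 0.05

def simulate_fisher(N, eta, k, rng):
    """Simulate k false-negative p-values and return the Fisher statistic
    sum(-2 * ln p*), to be compared with a chi-square on 2k df."""
    null = NormalDist()
    delta = math.sqrt((eta**2 / (1 - eta**2)) * N)
    alt = NormalDist(mu=delta)
    crit = null.inv_cdf(1 - ALPHA)          # step 1: critical value under H0
    beta = alt.cdf(crit)                    # steps 2-3: P(nonsignificant | H1)
    fisher = 0.0
    for _ in range(k):
        u = rng.uniform(0, beta)            # step 4: uniform draw on (0, beta)
        t = alt.inv_cdf(u)                  # step 5: matching test statistic
        p = 1 - null.cdf(t)                 # step 6: p-value under H0 (> alpha)
        p_star = (p - ALPHA) / (1 - ALPHA)  # Eq. (4.1)-style transform
        fisher += -2 * math.log(p_star)     # Eq. (4.2): Fisher statistic
    return fisher

print(round(simulate_fisher(N=62, eta=0.25, k=10, rng=random.Random(7)), 2))
```

Note how larger true effects produce nonsignificant $$p$$-values that hug the significance threshold, inflating the Fisher statistic — which is exactly why the test can detect evidence of false negatives.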
https://worldwidescience.org/topicpages/a/atomic+chains+surface.html
#### Sample records for atomic chains surface 1. Formation and structural phase transition in Co atomic chains on a Cu(775) surface International Nuclear Information System (INIS) Syromyatnikov, A. G.; Kabanov, N. S.; Saletsky, A. M.; Klavsyuk, A. L. 2017-01-01 The formation of Co atomic chains on a Cu(775) surface is investigated by the kinetic Monte Carlo method. It is found that the length of Co atomic chains formed as a result of self-organization during epitaxial growth is a random quantity and its mean value depends on the parameters of the experiment. The existence of two structural phases in atomic chains is detected using the density functional theory. In the first phase, the separations between an atom and its two nearest neighbors in a chain are 0.230 and 0.280 nm. In the second phase, an atomic chain has identical atomic spacings of 0.255 nm. It is shown that the temperature of the structural phase transition depends on the length of the atomic chain. 2. Formation and structural phase transition in Co atomic chains on a Cu(775) surface Energy Technology Data Exchange (ETDEWEB) Syromyatnikov, A. G.; Kabanov, N. S.; Saletsky, A. M.; Klavsyuk, A. L., E-mail: [email protected] [Moscow State University (Russian Federation) 2017-01-15 The formation of Co atomic chains on a Cu(775) surface is investigated by the kinetic Monte Carlo method. It is found that the length of Co atomic chains formed as a result of self-organization during epitaxial growth is a random quantity and its mean value depends on the parameters of the experiment. The existence of two structural phases in atomic chains is detected using the density functional theory. In the first phase, the separations between an atom and its two nearest neighbors in a chain are 0.230 and 0.280 nm. In the second phase, an atomic chain has identical atomic spacings of 0.255 nm. It is shown that the temperature of the structural phase transition depends on the length of the atomic chain. 3. 
Self-lacing atom chains International Nuclear Information System (INIS) Zandvliet, Harold J W; Van Houselt, Arie; Poelsema, Bene 2009-01-01 The structural and electronic properties of self-lacing atomic chains on Pt modified Ge(001) surfaces have been studied using low-temperature scanning tunnelling microscopy and spectroscopy. The self-lacing chains have a cross section of only one atom, are perfectly straight, thousands of atoms long and virtually defect free. The atomic chains are composed of dimers that have their bonds aligned in a direction parallel to the chain direction. At low temperatures the atomic chains undergo a Peierls transition: the periodicity of the chains doubles from a 2× to a 4× periodicity and an energy gap opens up. Furthermore, at low temperatures (T<80 K) novel quasi-one-dimensional electronic states are found. These quasi-one-dimensional electronic states originate from an electronic state of the underlying terrace that is confined between the atomic chains. 4. Correlation between morphology, electron band structure, and resistivity of Pb atomic chains on the Si(5 5 3)-Au surface International Nuclear Information System (INIS) Jałochowski, M; Kwapiński, T; Łukasik, P; Nita, P; Kopciuszyński, M 2016-01-01 Structural and electron transport properties of multiple Pb atomic chains fabricated on the Si(5 5 3)-Au surface are investigated using scanning tunneling spectroscopy, reflection high electron energy diffraction, angular resolved photoemission electron spectroscopy and in situ electrical resistance. The study shows that the growth of Pb atomic chains modulates the electron band structure of the pristine Si(5 5 3)-Au surface and hence changes its sheet resistivity. Strong correlation between chain morphology, electron band structure and electron transport properties is found. To explain experimental findings, a theoretical tight-binding model of multiple atomic chains interacting on an effective substrate is proposed. (paper) 5.
A study of atom zigzag chains on the surface of tungsten International Nuclear Information System (INIS) Audiffren, M.; Traimond, P.; Bardon, J.; Drechsler, M. 1978-01-01 Nishigaki and Nakamura have observed zigzag chains on the central (011) face of tungsten after field evaporation at T > 140 K. In this paper, a study of the formation, disappearance and structure of such chains is described. Tungsten tips of small radii down to 60 Å were used. Chains of 3 to 9 spots that are clearly visible are found even at 90 K. Four different structure models of the zigzag chains are discussed, including the multibranch model proposed by the Japanese authors. The interpretation of the experimental results shows fairly clearly that the real zigzag chain structure is a special non-dense structure. It must be formed by a local displacement of the tungsten adatoms in the field. Without the field, a zigzag chain is transformed into a two-dimensional cluster of nearest-neighbour atoms by a small increase in temperature. If the field is reintroduced, the cluster can revert to the initial zigzag structure. The zigzag structure is interpreted as being caused by forces of repulsion between the atom dipoles. (Auth.) 6. CHAINS-PC, Decay Chain Atomic Densities International Nuclear Information System (INIS) 1994-01-01 1 - Description of program or function: CHAINS computes the atom density of members of a single radioactive decay chain. The linearity of the Bateman equations allows tracing of interconnecting chains by manually accumulating results from separate calculations of single chains. Re-entrant loops can be treated as extensions of a single chain. Losses from the chain are also tallied. 2 - Method of solution: The Bateman equations are solved analytically using double-precision arithmetic. Poles are avoided by small alterations of the loss terms.
Multigroup fluxes, cross sections, and self-shielding factors entered as input are used to compute the effective specific reaction rates. The atom densities are computed at any specified times. 3 - Restrictions on the complexity of the problem: Maxima of 100 energy groups, 100 time values, 50 members in a chain. 7. Chain formation of metal atoms DEFF Research Database (Denmark) Bahn, Sune Rastad; Jacobsen, Karsten Wedel 2001-01-01 The possibility of formation of single-atomic chains by manipulation of nanocontacts is studied for a selection of metals (Ni, Pd, Pt, Cu, Ag, Au). Molecular dynamics simulations show that the tendency for chain formation is strongest for Au and Pt. Density functional theory calculations indicate that the metals which form chains exhibit pronounced many-atom interactions with strong bonding in low-coordinated systems. 8. DOS cones along atomic chains Science.gov (United States) Kwapiński, Tomasz 2017-03-01 The electron transport properties of a linear atomic chain are studied theoretically within the tight-binding Hamiltonian and the Green's function method. Variations of the local density of states (DOS) along the chain are investigated. They are crucial in scanning tunnelling experiments and give important insight into the electron transport mechanism and charge distribution inside chains. It is found that depending on the chain parity the local DOS at the Fermi level can form cone-like structures (DOS cones) along the chain. The general condition for the local DOS oscillations is obtained and the linear behaviour of the local density function is confirmed analytically. DOS cones are characterized by a linear decay towards the chain which is in contrast to the propagation properties of charge density waves, end states and Friedel oscillations in one-dimensional systems.
We find that DOS cones can appear due to non-resonant electron transport, the spin-orbit scattering or for chains fabricated on a substrate with localized electrons. It is also shown that for imperfect chains (e.g. with a reduced coupling strength between two neighboring sites) a diamond-like structure of the local DOS along the chain appears. 9. DOS cones along atomic chains International Nuclear Information System (INIS) Kwapiński, Tomasz 2017-01-01 The electron transport properties of a linear atomic chain are studied theoretically within the tight-binding Hamiltonian and the Green’s function method. Variations of the local density of states (DOS) along the chain are investigated. They are crucial in scanning tunnelling experiments and give important insight into the electron transport mechanism and charge distribution inside chains. It is found that depending on the chain parity the local DOS at the Fermi level can form cone-like structures (DOS cones) along the chain. The general condition for the local DOS oscillations is obtained and the linear behaviour of the local density function is confirmed analytically. DOS cones are characterized by a linear decay towards the chain which is in contrast to the propagation properties of charge density waves, end states and Friedel oscillations in one-dimensional systems. We find that DOS cones can appear due to non-resonant electron transport, the spin–orbit scattering or for chains fabricated on a substrate with localized electrons. It is also shown that for imperfect chains (e.g. with a reduced coupling strength between two neighboring sites) a diamond-like structure of the local DOS along the chain appears. (paper) 10. Surface effects on the mechanical elongation of AuCu nanowires: De-alloying and the formation of mixed suspended atomic chains International Nuclear Information System (INIS) Lagos, M. J.; Autreto, P. A. S.; Galvao, D. S.; Ugarte, D.; Bettini, J.; Sato, F.; Dantas, S. O. 
2015-01-01 We report here an atomistic study of the mechanical deformation of Au_xCu_(1−x) atomic-size wires (nanowires (NWs)) by means of high resolution transmission electron microscopy experiments. Molecular dynamics simulations were also carried out in order to obtain deeper insights on the dynamical properties of stretched NWs. The mechanical properties are significantly dependent on the chemical composition that evolves in time at the junction; some structures exhibit a remarkable de-alloying behavior. Also, our results represent the first experimental realization of mixed linear atomic chains (LACs) among transition and noble metals; in particular, surface energies induce chemical gradients on NW surfaces that can be exploited to control the relative LAC compositions (different number of gold and copper atoms). The implications of these results for nanocatalysis and spin transport of one-atom-thick metal wires are addressed 11. Surface effects on the mechanical elongation of AuCu nanowires: De-alloying and the formation of mixed suspended atomic chains Energy Technology Data Exchange (ETDEWEB) Lagos, M. J. [Instituto de Física Gleb Wataghin, Universidade Estadual de Campinas, R. Sergio B. de Holanda 777, 13083-859 Campinas-SP (Brazil); Laboratório Nacional de Nanotecnologia-LNNANO, 13083-970 Campinas-SP (Brazil); Autreto, P. A. S.; Galvao, D. S., E-mail: [email protected]; Ugarte, D. [Instituto de Física Gleb Wataghin, Universidade Estadual de Campinas, R. Sergio B. de Holanda 777, 13083-859 Campinas-SP (Brazil); Bettini, J. [Laboratório Nacional de Nanotecnologia-LNNANO, 13083-970 Campinas-SP (Brazil); Sato, F.; Dantas, S. O. [Departamento de Física, ICE, Universidade Federal de Juiz de Fora, 36036-330 Juiz de Fora-MG (Brazil) 2015-03-07 We report here an atomistic study of the mechanical deformation of Au_xCu_(1−x) atomic-size wires (nanowires (NWs)) by means of high resolution transmission electron microscopy experiments.
Molecular dynamics simulations were also carried out in order to obtain deeper insights on the dynamical properties of stretched NWs. The mechanical properties are significantly dependent on the chemical composition that evolves in time at the junction; some structures exhibit a remarkable de-alloying behavior. Also, our results represent the first experimental realization of mixed linear atomic chains (LACs) among transition and noble metals; in particular, surface energies induce chemical gradients on NW surfaces that can be exploited to control the relative LAC compositions (different number of gold and copper atoms). The implications of these results for nanocatalysis and spin transport of one-atom-thick metal wires are addressed. 12. Atom-surface potentials and atom interferometry International Nuclear Information System (INIS) Babb, J.F. 1998-01-01 Long-range atom-surface potentials characterize the physics of many actual systems and are now measurable spectroscopically in deflection of atomic beams in cavities or in reflection of atoms in atomic fountains. For a ground state, spherically symmetric atom the potential varies as -1/R^3 near the wall, where R is the atom-surface distance. For asymptotically large distances the potential is weaker and goes as -1/R^4 due to retardation arising from the finite speed of light. This diminished interaction can also be interpreted as a Casimir effect. The possibility of measuring atom-surface potentials using atomic interferometry is explored. The particular cases studied are the interactions of a ground-state alkali-metal atom and a dielectric or a conducting wall. Accurate descriptions of atom-surface potentials in theories of evanescent-wave atomic mirrors and evanescent wave-guided atoms are also discussed. (author) 13.
Surface parameter characterization of surface vibrations in linear chains International Nuclear Information System (INIS) Majlis, N.; Selzer, S.; Puszkarski, H.; Diep-The-Hung 1982-12-01 We consider the vibrations of a linear monatomic chain with a complex surface potential defined by the surface pinning parameter a = A e^(-iψ). It is found that in the case of a semi-infinite chain a is connected with the surface vibration wave number k = s + it by the exact relations: s = ψ, t = ln A. We also show that the solutions found can be regarded as approximate ones (in the limit L >> 1) for surface vibrations of a finite chain consisting of L atoms. (author) 14. Atomic beams probe surface vibrations International Nuclear Information System (INIS) Robinson, A.L. 1982-01-01 In the last two years, surface scientists have begun trying to obtain the vibrational frequencies of surface atoms in both insulating and metallic crystals from beams of helium atoms. It is the inelastic scattering that researchers use to probe surface vibrations. Inelastic atomic beam scattering has only been used to obtain vibrational frequency spectra from clean surfaces. Several experiments using helium beams are cited. (SC) 15. Electrospun regenerated cellulose nanofibrous membranes surface-grafted with polymer chains/brushes via the atom transfer radical polymerization method for catalase immobilization. Science.gov (United States) Feng, Quan; Hou, Dayin; Zhao, Yong; Xu, Tao; Menkhaus, Todd J; Fong, Hao 2014-12-10 In this study, an electrospun regenerated cellulose (RC) nanofibrous membrane with fiber diameters of ∼200-400 nm was prepared first; subsequently, 2-hydroxyethyl methacrylate (HEMA), 2-dimethylaminoethyl methacrylate (DMAEMA), and acrylic acid (AA) were selected as the monomers for surface grafting of polymer chains/brushes via the atom transfer radical polymerization (ATRP) method.
Thereafter, four nanofibrous membranes (i.e., RC, RC-poly(HEMA), RC-poly(DMAEMA), and RC-poly(AA)) were explored as innovative supports for immobilization of an enzyme of bovine liver catalase (CAT). The amount/capacity, activity, stability, and reusability of immobilized catalase were evaluated, and the kinetic parameters (Vmax and Km) for immobilized and free catalase were determined. The results indicated that the respective amounts/capacities of immobilized catalase on RC-poly(HEMA) and RC-poly(DMAEMA) nanofibrous membranes reached 78 ± 3.5 and 67 ± 2.7 mg g(-1), which were considerably higher than the previously reported values. Meanwhile, compared to that of free CAT (i.e., 18 days), the half-life periods of RC-CAT, RC-poly(HEMA)-CAT, RC-poly(DMAEMA)-CAT, and RC-poly(AA)-CAT were 49, 58, 56, and 60 days, respectively, indicating that the storage stability of immobilized catalase was also significantly improved. Furthermore, the immobilized catalase exhibited substantially higher resistance to temperature variation (tested from 5 to 70 °C) and lower degree of sensitivity to pH value (tested from 4.0 and 10.0) than the free catalase. In particular, according to the kinetic parameters of Vmax and Km, the nanofibrous membranes of RC-poly(HEMA) (i.e., 5102 μmol mg(-1) min(-1) and 44.89 mM) and RC-poly(DMAEMA) (i.e., 4651 μmol mg(-1) min(-1) and 46.98 mM) had the most satisfactory biocompatibility with immobilized catalase. It was therefore concluded that the electrospun RC nanofibrous membranes surface-grafted with 3-dimensional nanolayers of polymer chains/brushes would be 16. Cold atoms close to surfaces DEFF Research Database (Denmark) Krüger, Peter; Wildermuth, Stephan; Hofferberth, Sebastian 2005-01-01 Microscopic atom optical devices integrated on atom chips allow to precisely control and manipulate ultra-cold (T atoms and Bose-Einstein condensates (BECs) close to surfaces. The relevant energy scale of a BEC is extremely small (down to ... 
be utilized as a sensor for variations of the potential energy of the atoms close to the surface. Here we describe how to use trapped atoms as a measurement device and analyze the performance and flexibility of the field sensor. We demonstrate microscopic magnetic imaging with simultaneous high spatial... 17. Chain reaction. History of the atomic bomb International Nuclear Information System (INIS) Mania, Hubert 2010-01-01 Henri becquerel tracked down in 1896 a strange radiation, which was called radioactivity by Marie Curie. In the following centuries German scientists Max Planck, Albert Einstein and Werner Heisenberg presented fundamental contributions to understand processes in the atomic nucleus. At Goettingen, center of the international nuclear physics community, the American student J. Robert Oppenheimer admit to this physical research. In the beginning of 1939 the message of Otto Hahns' nuclear fission electrified researchers. The first step, unleashing atomic energy, was done. A half year later the Second World War begun. And suddenly being friend with and busily communicating physicians were devided into hostile power blocs as bearers of official secrets. The author tells in this exciting book the story of the first atomic bomb as a chain reaction of ideas, discoveries and visions, of friendships, jealousy and intrigues of scientists, adventurers and genius. (orig./GL) 18. Formation and properties of metal-oxygen atomic chains DEFF Research Database (Denmark) Thijssen, W.H.A.; Strange, Mikkel; de Brugh, J.M.J.A. 2008-01-01 of longer atomic chains. The mechanical and electrical properties of these diatomic chains have been investigated by determining local vibration modes of the chain and by measuring the dependence of the average chain-conductance on the length of the chain. Additionally, we have performed calculations......Suspended chains consisting of single noble metal and oxygen atoms have been formed. 
We provide evidence that oxygen can react with and be incorporated into metallic one-dimensional atomic chains. Oxygen incorporation reinforces the linear bonds in the chain, which facilitates the creation...

19. Symmetry chains for the atomic shell model. I. Classification of symmetry chains for atomic configurations
International Nuclear Information System (INIS)
Gruber, B.; Thomas, M.S.
1980-01-01
In this article the symmetry chains for the atomic shell model are classified in such a way that they lead from the group SU(4l+2) to its subgroup SO_J(3). The atomic configurations (nl)^N transform like irreducible representations of the group SU(4l+2), while SO_J(3) corresponds to total angular momentum in SU(4l+2). The defining matrices for the various embeddings are given for each symmetry chain that is obtained. These matrices also define the projection onto the weight subspaces for the corresponding subsymmetries and thus relate the various quantum numbers and determine the branching of representations. It is shown in this article that three (interrelated) symmetry chains are obtained, which correspond to L-S coupling, j-j coupling, and a seniority-dependent coupling. Moreover, for l ≤ 6 these chains are complete, i.e., there are no other chains but these. In articles to follow, the symmetry chains that lead from the group SO(8l+5) to SO_J(3) will be discussed, with the entire atomic shell transforming like an irreducible representation of SO(8l+5). The transformation properties of the states of the atomic shell will be determined according to the various symmetry chains obtained. The symmetry lattice discussed in this article forms a sublattice of the larger symmetry lattice with SO(8l+5) as supergroup. Thus the transformation properties of the states of the atomic configurations, according to the various symmetry chains discussed in this article, will be obtained too. (author)

20.
Majorana spin in magnetic atomic chain systems
Science.gov (United States)
Li, Jian; Jeon, Sangjun; Xie, Yonglong; Yazdani, Ali; Bernevig, B. Andrei
2018-03-01
In this paper, we establish that Majorana zero modes emerging from a topological band structure of a chain of magnetic atoms embedded in a superconductor can be distinguished from trivial localized zero-energy states that may accidentally form in this system using spin-resolved measurements. To demonstrate this key Majorana diagnostics, we study the spin composition of magnetic impurity induced in-gap Shiba states in a superconductor using a hybrid model. By examining the spin and spectral densities in the context of the Bogoliubov-de Gennes (BdG) particle-hole symmetry, we derive a sum rule that relates the spin densities of localized Shiba states with those in the normal state without superconductivity. Extending our investigations to a ferromagnetic chain of magnetic impurities, we identify key features of the spin properties of the extended Shiba state bands, as well as those associated with a localized Majorana end mode when the effect of spin-orbit interaction is included. We then formulate a phenomenological theory for the measurement of the local spin densities with spin-polarized scanning tunneling microscopy (STM) techniques. By combining the calculated spin densities and the measurement theory, we show that spin-polarized STM measurements can reveal a sharp contrast in spin polarization between an accidental zero-energy trivial Shiba state and a Majorana zero mode in a topological superconducting phase in atomic chains. We further confirm our results with numerical simulations that address generic parameter settings.

1.
Defect-induced conductance oscillations in short atomic chains
International Nuclear Information System (INIS)
2012-01-01
Electronic transport through a junction made of two gold electrodes connected with a gold chain containing a silver impurity is analyzed with a tight-binding model and density-functional theory. It is shown that the conductance depends in a simple way on the position of the impurity in the chain and the parity of the total number of atoms of the chain. For an odd chain the conductance takes on a higher value when the Ag impurity substitutes an even Au atom in the chain, and a lower one for an odd position of the Ag atom. In the case of an even chain the conductance hardly depends on the position of the Ag atom. This new kind of defect-induced parity oscillation of the conductance is significantly more prominent than the well-known even-odd effect related to the dependence of the conductance on the parity of the number of atoms in perfect chains. (paper)

2. Preparation of Transparent Bulk TiO2/PMMA Hybrids with Improved Refractive Indices via an in Situ Polymerization Process Using TiO2 Nanoparticles Bearing PMMA Chains Grown by Surface-Initiated Atom Transfer Radical Polymerization.
Science.gov (United States)
Maeda, Satoshi; Fujita, Masato; Idota, Naokazu; Matsukawa, Kimihiro; Sugahara, Yoshiyuki
2016-12-21
Transparent TiO2/PMMA hybrids with a thickness of 5 mm and improved refractive indices were prepared by in situ polymerization of methyl methacrylate (MMA) in the presence of TiO2 nanoparticles bearing poly(methyl methacrylate) (PMMA) chains grown using surface-initiated atom transfer radical polymerization (SI-ATRP), and the effect of the chain length of the modified PMMA on the dispersibility of the modified TiO2 nanoparticles in the bulk hybrids was investigated.
The surfaces of TiO2 nanoparticles were modified with both m-(chloromethyl)phenylmethanoyloxymethylphosphonic acid bearing a terminal ATRP initiator and isodecyl phosphate with a high affinity for common organic solvents, leading to sufficient dispersibility of the surface-modified particles in toluene. Subsequently, SI-ATRP of MMA was achieved from the modified surfaces of the TiO2 nanoparticles without aggregation of the nanoparticles in toluene. The molecular weights of the PMMA chains cleaved from the modified TiO2 nanoparticles increased with the polymerization period and exhibited a narrow distribution, indicating chain growth controlled by SI-ATRP. The nanoparticles bearing PMMA chains were well dispersed in MMA regardless of the polymerization period. Bulk PMMA hybrids containing modified TiO2 nanoparticles with a thickness of 5 mm were prepared by in situ polymerization of the MMA dispersion. The transparency of the hybrids depended significantly on the chain length of the modified PMMA on the nanoparticles, because modified PMMA of low molecular weight induced aggregation of the TiO2 nanoparticles during the in situ polymerization process. The refractive indices of the bulk hybrids could be controlled by adjusting the TiO2 content and could be increased up to 1.566 for 6.3 vol % TiO2 content (1.492 for pristine PMMA).

3. Effect of temperature on atom-atom collision chain length in metals
International Nuclear Information System (INIS)
Makarov, A.A.; Demkin, N.A.; Lyashchenko, B.G.
1981-01-01
Focused atom-atom collision chain lengths are calculated for fcc crystals, taking thermal oscillations into account. A hard-sphere model with the Born-Mayer potential has been used in the calculations. The dependence of chain length on the temperature, energy, and direction of motion of the first chain atom is considered for Cu, Au, Ag, Pb, and Ni.
The plots presented show that the chain lengths decrease strongly with temperature: for gold at T = 100 K the chain length reaches up to 37 interatomic spacings, whereas at T = 1000 K it decreases to 5 interatomic distances. The dependence of the energy lost by the chain atoms on the atom number in the chain is obtained over a wide range of crystal temperatures and primary chain-atom energies. [ru]

4. Experimental realization of suspended atomic chains composed of different atomic species
International Nuclear Information System (INIS)
Bettini, Jefferson; Ugarte, Daniel; Sato, Fernando; Galvao, Douglas Soares; Coura, Pablo Zimmerman; Dantas, Socrates de Oliveira
2006-01-01
We report high resolution transmission electron microscopy (HRTEM) and molecular dynamics results of the first experimental test of suspended atomic chains composed of different atomic species, formed from spontaneous stretching of metallic nanowires. (author)

5. Mechanisms and energetics of surface atomic processes
International Nuclear Information System (INIS)
Tsong, T.T.
1991-01-01
The energies involved in various surface atomic processes (surface diffusion, the binding of small atomic clusters on the surface, the interaction between two adsorbed atoms, the dissociation of an atom from a small cluster or from a surface layer, the binding of kink-site atoms or atoms at different adsorption sites to the surface, etc.) can be derived from an analysis of atomically resolved field ion microscope images and a kinetic energy measurement of low-temperature field-desorbed ions using the time-of-flight atom-probe field ion microscope. These energies can be used for comparison with theories and to understand the transport of atoms on the surface in atomic reconstructions, epitaxial growth of surface layers and crystal growth, adsorption-layer superstructure formation, and also why an atomic ordering or atomic reconstruction at the surface is energetically favored.
Mechanisms of some of the surface atomic processes are also clarified by these quantitative, atomic-resolution studies. In this paper, work in this area is briefly reviewed.

6. Atomic probes of surface structure and dynamics
International Nuclear Information System (INIS)
Heller, E.J.; Jonsson, H.
1992-01-01
The following were studied: a new semiclassical method for scattering calculations, He atom scattering from defective Pt surfaces, He atom scattering from Xe overlayers, thermal dissociation of H2 on Cu(110), spin-flip scattering of atoms from surfaces, and Car-Parrinello simulations of surface processes.

7. Modified Li chains as atomic switches
KAUST Repository
Wunderlich, Thomas; Akgenc, Berna; Eckern, Ulrich; Schuster, Cosima; Schwingenschlögl, Udo
2013-01-01
We present electronic structure and transport calculations for hydrogen and lithium chains, using density functional theory and scattering theory on the Green's function level, to systematically study impurity effects on the transmission coefficient...

8. Enhanced binding capacity of boronate affinity adsorbent via surface modification of silica by combination of atom transfer radical polymerization and chain-end functionalization for high-efficiency enrichment of cis-diol molecules
Energy Technology Data Exchange (ETDEWEB)
Wang, Wei; He, Maofang; Wang, Chaozhan; Wei, Yinmao, E-mail: [email protected]
2015-07-30

9. PREFACE: Atom-surface scattering
Science.gov (United States)
2010-08-01
It has been a privilege and a real pleasure to organize this special issue or festschrift in the general field of atom-surface scattering (and its interaction) in honor of J R Manson. This is a good opportunity and an ideal place to express our deep gratitude to one of the leaders in this field for his fundamental and outstanding scientific contributions.
J R Manson, or Dick to his friends and colleagues, is one of the founding fathers, together with N Cabrera and V Celli, of the 'Theory of surface scattering and detection of surface phonons'. This is the title of the very well-known first theoretical paper by Dick, published in Physical Review Letters in 1969. My first meeting with Dick was around twenty years ago in Saclay. J Lapujoulade organized a small group seminar about selective adsorption resonances in metal vicinal surfaces. We discussed this important issue in surface physics and many other things as if we had always known each other. This familiarity and warm welcome struck me from the very beginning. Over the years, I found this to be a very attractive aspect of his personality. During my stays in Göttingen, we had the opportunity to talk widely about science and life at lunch or dinner time, walking or cycling. During these nice meetings, he showed, with humility, an impressive cultural background. It is quite clear that his personal opinions about history, religion, politics, music, etc., come from considering and analyzing them as 'open dynamical systems'. In particular, with good food and better wine in a restaurant or at home, a happy cheerful soirée is guaranteed with him; even with only a good beer or espresso, an interesting conversation arises naturally. He likes to listen before speaking. Probably not many people know of his interest in tractors. He has an incredible collection of very old tractors at home. In one of my visits to Clemson, he showed me the collection, explaining their technical properties to me in great detail.

10. Electronic Conduction through Atomic Chains, Quantum Well and Quantum Wire
International Nuclear Information System (INIS)
Sharma, A. C.
2011-01-01
Charge transport is dynamically and strongly linked with atomic structure in nanostructures.
We report our ab initio calculations on electronic transport through atomic chains, and model calculations of electron-electron and electron-phonon scattering rates in the presence of a random impurity potential in a quantum well and in a quantum wire. We computed synthesis and ballistic transport through: (a) C- and Si-based atomic chains attached to metallic electrodes, (b) armchair (AC), zigzag (ZZ), mixed, rotated-AC and rotated-ZZ geometries of small molecules made of 2 S, 6 C and 4 H atoms attached to metallic electrodes, and (c) a carbon atomic chain attached to graphene electrodes. Computed results show that the synthesis of various atomic chains is practically possible and their transmission coefficients are nonzero over a wide energy range. The ab initio calculations on electronic transport have been performed using a Landauer-type scattering formalism formulated in terms of Green's functions in combination with ground-state DFT. The electron-electron and electron-phonon scattering rates have been calculated as a function of excitation energy, both at zero and finite temperatures, for disordered 2D and 1D systems. Our model calculations suggest that electron scattering rates in a disordered system are mainly governed by the effective dimensionality of the system, the carrier concentration, and dynamical screening effects.

11. Modified Li chains as atomic switches
KAUST Repository
Wunderlich, Thomas
2013-09-06
We present electronic structure and transport calculations for hydrogen and lithium chains, using density functional theory and scattering theory on the Green's function level, to systematically study impurity effects on the transmission coefficient. To this end we address various impurity configurations. Tight-binding results allow us to interpret our findings. We analyze under which circumstances impurities lead to level splitting and/or can be used to switch between metallic and insulating states.
We also address the effects of strongly electronegative impurities.

12. Effect of surface parameter on interband surface mode frequencies of finite diatomic chain
International Nuclear Information System (INIS)
Puszkarski, H.
1982-07-01
The surface modes of a finite diatomic chain of alternating atoms (M1 ≠ M2) are investigated. The surface force constants are assumed to differ from the bulk ones, with the resulting surface parameter a-tilde identical on both ends of the chain. Criteria governing the existence of interband surface (IBS) modes, with frequencies lying in the forbidden gap between the acoustical and optical bulk bands, for a natural (a-tilde = 1) as well as a non-natural (a-tilde ≠ 1) surface defect are analysed by the difference-equation method. It is found that the IBS modes localize, depending on the value of the surface parameter a-tilde, either at the surface of lighter atoms (if a-tilde is positive) or at that of heavier atoms (if a-tilde is negative). Two, one, or no IBS modes are found to exist in the chain, depending on the relation between the mass ratio and the surface parameter, the quantities on which the surface localization increment t-tilde depends. If two modes are present (one acoustical and the other optical), their frequencies are disposed symmetrically with respect to the middle of the forbidden gap, provided the surface defect is natural, or asymmetrically if it is other than natural. If the localization of the IBS mode exceeds a well-defined critical value t_c, the mode frequency becomes complex, indicating that the mode undergoes damping. A comparison of the present results with those obtained by Wallis for the diatomic chain with natural surface defect is also given. (author)

13. Atomic probes of surface structure and dynamics
International Nuclear Information System (INIS)
Heller, E.J.; Jonsson, H.
1992-01-01
Progress for the period Sept. 15, 1992 to Sept. 14, 1993 is discussed.
Semiclassical methods that will allow much faster and more accurate three-dimensional atom-surface scattering calculations, both elastic and inelastic, are being developed. The scattering of He atoms from buckyballs is being investigated as a test problem. Somewhat more detail is given on studies of He atom scattering from defective Pt surfaces. Molecular dynamics simulations of He+ and Ar+ ion sputtering of Pt surfaces are also being done. He atom scattering from Xe overlayers on metal surfaces and the thermalized dissociation of H2 on Cu(110) are being studied. (R.W.R.) 64 refs

14. Toward tailoring Majorana bound states in artificially constructed magnetic atom chains on elemental superconductors
Science.gov (United States)
Thorwart, Michael
2018-01-01
Realizing Majorana bound states (MBS) in condensed matter systems is a key challenge on the way toward topological quantum computing. As a promising platform, one-dimensional magnetic chains on conventional superconductors were theoretically predicted to host MBS at the chain ends. We demonstrate a novel approach to the design of model-type atomic-scale systems for studying MBS using single-atom manipulation techniques. Our artificially constructed atomic Fe chains on a Re surface exhibit spin spiral states and a remarkable enhancement of the local density of states at zero energy, strongly localized at the chain ends. Moreover, the zero-energy modes at the chain ends are shown to emerge and become stabilized with increasing chain length. Tight-binding model calculations based on parameters obtained from ab initio calculations corroborate that the system resides in the topological phase. Our work opens new pathways to design MBS in atomic-scale hybrid structures as a basis for fault-tolerant topological quantum computing. PMID:29756034

15.
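The chain-length stabilization of end modes described in the preceding abstract can be illustrated with a toy model. The sketch below is not taken from the cited work; it diagonalizes the standard Kitaev-chain Bogoliubov-de Gennes (BdG) matrix with assumed parameters (t = Δ = 1, μ = 0.5) and shows the lowest BdG energy collapsing toward zero as the chain grows, while a trivial chain (μ = 3) retains a finite gap.

```python
import numpy as np

def kitaev_bdg(n, t=1.0, delta=1.0, mu=0.5):
    """BdG matrix of an open Kitaev chain in the basis (c_1..c_n, c^+_1..c^+_n)."""
    h = -mu * np.eye(n)              # on-site chemical potential
    d = np.zeros((n, n))             # antisymmetric pairing block
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = -t
        d[i, i + 1] = delta
        d[i + 1, i] = -delta
    return np.block([[h, d], [-d, -h]])

def lowest_mode(n, **kw):
    """Smallest |E| of the BdG spectrum: the end-mode splitting."""
    return np.min(np.abs(np.linalg.eigvalsh(kitaev_bdg(n, **kw))))

# Topological phase (|mu| < 2t): the end-mode energy shrinks with chain length
gaps = [lowest_mode(n) for n in (4, 8, 16)]
```

In the topological phase the splitting decays roughly exponentially with chain length, mirroring how the zero-bias signature in the experiment sharpens for longer Fe chains; in the trivial phase (`mu=3.0`) no such collapse occurs.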
Surface effects in quantum spin chains
International Nuclear Information System (INIS)
Parkinson, J B
2004-01-01
Chains of quantum spins with open ends and isotropic Heisenberg exchange are studied. By diagonalizing the Hamiltonian for chains of finite length N and obtaining all the energy eigenvalues, the magnetic susceptibility χ, the specific heat C_v, and the partition function Z can be calculated exactly for these chains. The high-temperature series expansions of these are then evaluated. For χ and C_v it is found that the terms in the series consist of three parts. One is the normal high-T series already known in great detail for the N → ∞ ring (chain with periodic boundary conditions). The other two consist of a 'surface' term and a correction term of order (1/T)^N. The surface term is found as a series up to and including (1/T)^8 for spin S = 1/2 and 1. Simple Padé approximant formulae are given to extend the range of validity below T = 1.

16. Negative Differential Resistance in Atomic Carbon Chain-Graphene Junctions
International Nuclear Information System (INIS)
An Liping; Liu Chunmei; Liu Nianhua
2012-01-01
We investigate the electronic transport properties of atomic carbon chain-graphene junctions using density-functional theory combined with non-equilibrium Green's functions. The results show that the transport properties depend sensitively on the contact geometry of the carbon chain. From the calculated I-V curves we find negative differential resistance (NDR) in the two types of junctions. The NDR can be considered a result of molecular orbitals moving relative to the bias window. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

17. Potential energy surface of alanine polypeptide chains
DEFF Research Database (Denmark)
Solov'yov, Ilia; Yakubovich, Alexander V.; Solov'yov, Andrey V.
2006-01-01
The multidimensional potential energy surfaces of peptide chains consisting of three and six alanine (Ala) residues have been studied with respect to the degrees of freedom related to the twist of these molecules relative to the peptide backbone (these degrees of freedom are responsible...

18. Dynamical Negative Differential Resistance in Antiferromagnetically Coupled Few-Atom Spin Chains
Science.gov (United States)
Rolf-Pissarczyk, Steffen; Yan, Shichao; Malavolti, Luigi; Burgess, Jacob A. J.; McMurtrie, Gregory; Loth, Sebastian
2017-11-01
We present the appearance of negative differential resistance (NDR) in spin-dependent electron transport through a few-atom spin chain. A chain of three antiferromagnetically coupled Fe atoms (Fe trimer) was positioned on a Cu2N/Cu(100) surface and contacted with the spin-polarized tip of a scanning tunneling microscope, thus coupling the Fe trimer to one nonmagnetic and one magnetic lead. Pronounced NDR appears at the low bias of 7 mV, where inelastic electron tunneling dynamically locks the atomic spin in a long-lived excited state. This causes a rapid increase of the magnetoresistance between the spin-polarized tip and Fe trimer and quenches elastic tunneling. By varying the coupling strength between the tip and Fe trimer, we find that in this transport regime the dynamic locking of the Fe trimer competes with the magnetic exchange interaction, which statically forces the Fe trimer into its high-magnetoresistance state and removes the NDR.

19. Diamond surface: atomic and electronic structure
International Nuclear Information System (INIS)
Pate, B.B.
1984-01-01
Experimental studies of the diamond surface (with primary emphasis on the (111) surface) are presented. Aspects of the diamond surface which are addressed include (1) the electronic structure, (2) the atomic structure, and (3) the effect of termination of the lattice by foreign atoms.
Limited studies of graphite are discussed for comparison with the diamond results. Experimental results from valence band and core level photoemission spectroscopy (PES), Auger electron spectroscopy (AES), low energy electron diffraction (LEED), and carbon 1s near edge x-ray absorption fine structure (NEXAFS) spectroscopy (both the total electron yield (TEY) and Auger electron yield (AEY) techniques) are used to study and characterize both the clean and hydrogenated surface. In addition, the interaction of hydrogen with the diamond surface is examined using results from vibrational high-resolution low-energy electron loss spectroscopy (in collaboration with Waclawski, Pierce, Swanson, and Celotta at the National Bureau of Standards) and photon stimulated ion desorption (PSID) yield at photon energies near the carbon K-edge (hν ≥ 280 eV). Both EELS and PSID verify that the mechanically polished 1 x 1 surface is hydrogen terminated and also that the reconstructed surface is hydrogen free. The (111) 2 x 2/2 x 1 reconstructed surface is obtained from the hydrogenated (111) 1 x 1:H surface by annealing to approximately 1000 °C. We observe occupied intrinsic surface states and a surface chemical shift (0.95 ± 0.1 eV) to lower binding energy of the carbon 1s level on the hydrogen-free reconstructed surface. Atomic hydrogen is found to be reactive with the reconstructed surface, while molecular hydrogen is relatively inert. Exposure of the reconstructed surface to atomic hydrogen results in chemisorption of hydrogen and removal of the intrinsic surface state emission in and near the band gap region.

20. Formation of hollow atoms above a surface
Science.gov (United States)
Briand, Jean Pierre; Phaneuf, Ronald; Terracol, Stephane; Xie, Zuqi
2012-06-01
Slow highly stripped ions approaching or penetrating surfaces are known to capture electrons into outer shells of the ions, leaving the innermost shells empty, and forming hollow atoms.
Electron capture occurs above and below the surfaces. The existence of hollow atoms below surfaces, e.g. Ar atoms whose K and L shells are empty, with all electrons lying in the M and N shells, was demonstrated in 1990 [1]. At nanometre distances above surfaces, the excited ions may not have enough time to decay before hitting the surfaces, and the formation of hollow atoms above surfaces has even been questioned [2]. To observe it, one must increase the time above the surface by decelerating the ions. We have for the first time decelerated O^7+ ions to energies as low as 1 eV/q, below the minimum energy gained by the ions due to the acceleration by their image charge. As expected, no ion backscattering (trampoline effect) above a dielectric (Ge) was observed, and at the lowest ion kinetic energies most of the observed x-rays were found to be emitted by the ions after surface contact.
[1] J. P. Briand et al., Phys. Rev. Lett. 65 (1990) 159.
[2] J. P. Briand, AIP Conference Proceedings 215 (1990) 513.

1. Quantum quench in an atomic one-dimensional Ising chain.
Science.gov (United States)
Meinert, F; Mark, M J; Kirilov, E; Lauber, K; Weinmann, P; Daley, A J; Nägerl, H-C
2013-08-02
We study nonequilibrium dynamics for an ensemble of tilted one-dimensional atomic Bose-Hubbard chains after a sudden quench to the vicinity of the transition point of the Ising paramagnetic to antiferromagnetic quantum phase transition. The quench results in coherent oscillations for the orientation of effective Ising spins, detected via oscillations in the number of doubly occupied lattice sites. We characterize the quench by varying the system parameters. We report significant modification of the tunneling rate induced by interactions and show clear evidence for collective effects in the oscillatory response.

2.
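The coherent post-quench oscillations reported in the preceding abstract can be mimicked in a few lines with a toy spin model. The sketch below is not the tilted Bose-Hubbard system of the paper; it exactly evolves a small transverse-field Ising chain (illustrative parameters J = 1, h = 0.5, N = 6) after a sudden quench from the fully polarized state and records the resulting magnetization oscillations.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-site chain via Kronecker products."""
    m = np.eye(1)
    for i in range(n):
        m = np.kron(m, single if i == site else np.eye(2))
    return m

def ising_h(n, j=1.0, h=0.5):
    """Open transverse-field Ising chain: H = -J sum sz.sz - h sum sx."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H += -j * op(sz, i, n) @ op(sz, i + 1, n)
    for i in range(n):
        H += -h * op(sx, i, n)
    return H

n = 6
evals, evecs = np.linalg.eigh(ising_h(n))
psi0 = np.zeros(2**n); psi0[0] = 1.0                 # all spins up: quench initial state
mz = sum(op(sz, i, n) for i in range(n)) / n          # average magnetization operator

def magnetization(t):
    psi = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    return float(np.real(psi.conj() @ (mz @ psi)))

m = [magnetization(t) for t in np.linspace(0, 10, 101)]
```

The magnetization starts at 1 and oscillates coherently below it, the small-system analogue of the oscillating doublon number used as the observable in the experiment.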
Superhydrophilic surfaces from short and medium chain solvo-surfactants
Directory of Open Access Journals (Sweden)
Valentin Romain
2013-01-01
Pure monoglycerides (GM-Cs) and glycerol carbonate esters (GCE-Cs) are two families of oleochemical molecules composed of a polar part (glycerol for GM-Cs, glycerol carbonate for GCE-Cs) and a fatty acid lipophilic part. From a chemical point of view, GM-Cs include two free oxygen atoms in the hydroxyl functions and one ester function between the fatty acid and the glycerol parts. GCE-Cs contain two blocked oxygen atoms in the cyclic carbonate backbone and three ester functions: two endocyclic in the five-membered cyclic carbonate function, and one exocyclic between the fatty acid and glycerol carbonate parts. At the physico-chemical level, GM-Cs and GCE-Cs are multifunctional molecules with amphiphilic structures: a hydrophobic chain common to both families and a polar head, glycerol for GM-Cs and glycerol carbonate for GCE-Cs. Physicochemical properties depend on chain length, odd or even carbon number in the chain, and glyceryl or cyclocarbonic polar heads. The solvo-surfactant character of GM-Cs and especially GCE-Cs was discussed through measurements of the critical micellar concentration (CMC) or critical aggregation concentration (CAC). These surface-active glycerol esters/glycerol carbonate esters were classified following their hydrophilic/hydrophobic character, correlated to their chain length (log P_octanol/water = f(carbon atom number)). Differential scanning calorimetry and optical polarized light microscopy allowed us to highlight the self-assembling properties of the glycerol carbonate esters alone and in the presence of water. We studied by thermal analysis the polymorphic behaviour of GCE-Cs, and the correlation between their melting points and the chain lengths. Coupling the self-aggregation and crystallization properties, superhydrophilic surfaces were obtained by formulating GM-Cs and GCE-Cs.
An efficient, durable water-repellent coating of various metallic and polymeric surfaces was thus enabled. Such surfaces coated by self-assembled fatty acid...

3. Interlocking Molecular Gear Chains Built on Surfaces.
Science.gov (United States)
Zhao, Rundong; Qi, Fei; Zhao, Yan-Ling; Hermann, Klaus E; Zhang, Rui-Qin; Van Hove, Michel A
2018-05-17
Periodic chains of molecular gears, in which molecules couple with each other and rotate on surfaces, have previously been explored by us theoretically using ab initio simulation tools. On the basis of the knowledge and experience gained about the interactions between neighboring molecular gears, we here explore the transmission of rotational motion and energy over larger distances, namely through a longer chain of gear-like passive "slave" molecules. Such microscopic gears exhibit quite different behaviors compared to rigid cogwheels in the macroscopic world, due to their structural flexibility affecting the intermolecular interaction. Here, we investigate the capabilities of such gear chains and reveal the mechanisms of the transmission process in terms of both quantum-level density functional theory (DFT) and simple classical mechanics. We find that the transmission of rotation along gear chains depends strongly on the gear-gear distance: short distances can cause tilting of gears and even irregular "creep-then-jump" (or "stick-slip") motion or expulsion of gears; long gear-gear distances cause weak coupling between gears, slipping and skipping. More importantly, for transmission of rotation at intermediate gear-gear distances, our modeling clearly exhibits the relative roles of several important factors: flexibility of gear arms, axles, and supports, as well as resulting rotational delays, slippages, and thermal and other effects. These studies therefore allow better informed design of future molecular machine components involving motors, gears, axles, etc.

4.
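The "simple classical mechanics" picture invoked in the gear-chain abstract can be caricatured with overdamped coupled rotors. This is only an illustrative sketch under assumed dynamics (a cosine coupling that favours counter-rotation of meshed neighbours, a driver held at fixed rate, explicit Euler integration); it is not the DFT modeling of the paper. With sufficiently strong coupling the whole chain locks, and successive gears rotate with alternating sense like macroscopic cogwheels.

```python
import numpy as np

def simulate_gear_chain(n_slaves=4, k=5.0, omega=1.0, dt=1e-3, t_end=30.0):
    """Overdamped rotor chain: gear 0 is driven at rate omega; neighbours couple
    via V = k*(1 - cos(theta_i + theta_{i+1})), which favours counter-rotation."""
    theta = np.zeros(n_slaves + 1)
    steps = int(round(t_end / dt))
    for s in range(steps):
        theta[0] = omega * s * dt                 # driver: imposed rotation
        torque = np.zeros_like(theta)
        for i in range(n_slaves):                 # torque from each meshed pair (i, i+1)
            g = k * np.sin(theta[i] + theta[i + 1])
            torque[i] -= g
            torque[i + 1] -= g
        theta[1:] += dt * torque[1:]              # slaves relax (overdamped dynamics)
    return theta

theta = simulate_gear_chain()
rates = theta / 30.0   # mean angular velocity of each gear over the run
```

When the coupling `k` is reduced well below the drive rate, the locked solution disappears and the slave gears slip, a crude analogue of the slipping and skipping regimes described for large gear-gear distances.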
Surface Plasmon Polaritons Probed with Cold Atoms
DEFF Research Database (Denmark)
Kawalec, Tomasz; Sierant, Aleksandra; Panas, Roman
2017-01-01
We report on an optical mirror for cold rubidium atoms based on a repulsive dipole potential created by means of a modified recordable digital versatile disc. Using the mirror, we have determined the absolute value of the surface plasmon polariton (SPP) intensity, reaching 90 times the intensity...

5. First-principles description of atomic gold chains on Ge(001)
KAUST Repository
López-Moreno, S.; Muñoz, A.; Romero, A. H.; Schwingenschlögl, Udo
2010-01-01
We have performed density-functional theory calculations, including the spin-orbit correction, to investigate atomic gold chains on Ge(001). A set of 26 possible configurations of the Au/Ge(001) system with c(4×2) and c(8×2) symmetries is studied. Our data show that the c(4×2) order results in the lowest energy, which is not in direct agreement with recent experiments. Using total-energy calculations, we are able to explain these differences. We address the electronic band structure and apply the Tersoff-Hamann approach to correlate our data to scanning-tunneling microscopy (STM). We obtain two highly competitive structures of the atomic Au chains, for which we report simulated STM images in order to clarify the composition of the experimental Au/Ge(001) surface.

6. First-principles description of atomic gold chains on Ge(001)
KAUST Repository
López-Moreno, S.
2010-01-25
We have performed density-functional theory calculations, including the spin-orbit correction, to investigate atomic gold chains on Ge(001). A set of 26 possible configurations of the Au/Ge(001) system with c(4×2) and c(8×2) symmetries is studied. Our data show that the c(4×2) order results in the lowest energy, which is not in direct agreement with recent experiments. Using total-energy calculations, we are able to explain these differences.
We address the electronic band structure and apply the Tersoff-Hamann approach to correlate our data to scanning-tunneling microscopy (STM). We obtain two highly competitive structures of the atomic Au chains for which we report simulated STM images in order to clarify the composition of the experimental Au/Ge(001) surface. 7. Geometric stability and electronic structure of infinite and finite phosphorus atomic chains International Nuclear Information System (INIS) Qiao Jingsi; Zhou Linwei; Ji Wei 2017-01-01 One-dimensional mono- or few-atomic chains were successfully fabricated in a variety of two-dimensional materials, like graphene, BN, and transition metal dichalcogenides, which exhibit striking transport and mechanical properties. However, atomic chains of black phosphorus (BP), an emerging electronic and optoelectronic material, is yet to be investigated. Here, we comprehensively considered the geometry stability of six categories of infinite BP atomic chains, transitions among them, and their electronic structures. These categories include mono- and dual-atomic linear, armchair, and zigzag chains. Each zigzag chain was found to be the most stable in each category with the same chain width. The mono-atomic zigzag chain was predicted as a Dirac semi-metal. In addition, we proposed prototype structures of suspended and supported finite atomic chains. It was found that the zigzag chain is, again, the most stable form and could be transferred from mono-atomic armchair chains. An orientation dependence was revealed for supported armchair chains that they prefer an angle of roughly 35 ° –37 ° perpendicular to the BP edge, corresponding to the [110] direction of the substrate BP sheet. These results may promote successive research on mono- or few-atomic chains of BP and other two-dimensional materials for unveiling their unexplored physical properties. (special topic) 8. 
8. Atomic profile imaging of ceramic oxide surfaces. International Nuclear Information System (INIS). Bursill, L.A.; Peng JuLin; Sellar, J.R. 1989-01-01. Atomic surface profile imaging is an electron-optical technique capable of revealing directly the surface crystallography of ceramic oxides. Use of an image intensifier with a TV camera allows fluctuations in surface morphology and surface reactivity to be recorded and analyzed using digitized image data. This paper reviews aspects of the electron-optical techniques, including interpretations based upon computer-simulation image-matching techniques. An extensive range of applications is then presented for ceramic oxides of commercial interest for advanced materials applications, including uranium oxide (UO2); magnesium and nickel oxide (MgO, NiO); the ceramic superconductor YBa2Cu3O6.7; barium titanate (BaTiO3); sapphire (α-Al2O3); haematite (α-Fe2O3); monoclinic, tetragonal and cubic monocrystalline forms of zirconia (ZrO2); lead zirconium titanate (PZT + 6 mol.% NiNbO3); and ZBLAN fluoride glass. Atomic-scale detail has been obtained of local structures such as steps associated with vicinal surfaces, facetting parallel to stable low-energy crystallographic planes, monolayer formation on certain facets, relaxation and reconstructions, oriented overgrowth of lower oxides, chemical decomposition of complex oxides into component oxides, as well as amorphous coatings. This remarkable variety of observed surface stabilization mechanisms is discussed in terms of novel double-layer electrostatic depolarization mechanisms, as well as classical concepts of the physics and chemistry of surfaces (ionization and affinity energies and work function). 46 refs., 16 figs.

9. The influence of the surface atomic structure on surface diffusion. International Nuclear Information System (INIS). Ghaleb, Dominique. 1984-03-01. This work represents the first quantitative study of the influence of the surface atomic structure on surface diffusion (in the range 0.2 Tf to 0.5 Tf, where Tf is the melting temperature of the substrate). The analysis of our results on a microscopic scale shows low formation and migration energies for adatoms; we can describe the diffusion on surfaces with a very simple model. On (110) surfaces at low temperature the diffusion is controlled by the exchange mechanism; at higher temperature direct jumps of adatoms along the channels also contribute to the diffusion process. (author)

10. Shallow surface depth profiling with atomic resolution. International Nuclear Information System (INIS). Xi, J.; Dastoor, P.C.; King, B.V.; O'Connor, D.J. 1999-01-01. It is possible to derive atomic layer-by-layer composition depth profiles from popular electron spectroscopic techniques, such as X-ray photoelectron spectroscopy (XPS) or Auger electron spectroscopy (AES). When ion-sputtering-assisted AES or XPS is used, the changes that occur during the establishment of the steady state in the sputtering process make these techniques increasingly inaccurate for depths less than 3 nm. Therefore the non-destructive techniques of angle-resolved XPS (ARXPS) or AES (ARAES) have to be used in this case. In this paper several data-processing algorithms have been used to extract atomically resolved depth profiles of a shallow surface (down to 1 nm) from ARXPS and ARAES data.
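The surface sensitivity that entry 10 exploits comes from the exponential attenuation of electrons escaping from depth: the signal of layer n is weighted by exp(-n·d / (λ·cos θ)), so tilting the detector toward grazing emission emphasizes the topmost layers. A minimal sketch, assuming a uniform layer spacing d and inelastic mean free path λ (both illustrative numbers, not taken from the paper):

```python
import math

def layer_weights(n_layers=10, d=0.25, lam=2.0, theta_deg=0.0):
    """Relative contribution of each atomic layer (n = 0 at the surface)
    to the detected signal at emission angle theta from the normal.
    d and lam are in the same length units (here: nm, illustrative)."""
    c = math.cos(math.radians(theta_deg))
    raw = [math.exp(-n * d / (lam * c)) for n in range(n_layers)]
    total = sum(raw)
    return [r / total for r in raw]

if __name__ == "__main__":
    normal = layer_weights(theta_deg=0.0)
    grazing = layer_weights(theta_deg=70.0)
    # tilting toward grazing emission boosts the top-layer share
    print(round(normal[0], 3), round(grazing[0], 3))
```

Inverting a set of such angle-dependent weight equations for the unknown layer compositions is, in essence, what the data-processing algorithms mentioned in the abstract do.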
11. Effects of atomic oxygen irradiation on the surface properties of phenolphthalein poly(ether sulfone). International Nuclear Information System (INIS). Pei Xianqiang; Li Yan; Wang Qihua; Sun Xiaojun. 2009-01-01. To study the effects of the low-Earth-orbit environment on the surface properties of polymers, phenolphthalein poly(ether sulfone) (PES-C) blocks were irradiated by atomic oxygen in a ground-based simulation system. The surface properties of the pristine and irradiated blocks were studied by attenuated total-reflection FTIR (FTIR-ATR), X-ray photoelectron spectroscopy (XPS) and scanning electron microscopy (SEM). It was found that atomic oxygen irradiation induced the destruction of PES-C molecular chains, including their scission and oxidation, as evidenced by the FTIR and XPS results. The scission of PES-C molecular chains decreased the relative concentration of C in the surface, while the oxidation increased the relative concentration of O in the surface. The changes in surface chemical structure and composition also changed the surface morphology of the block, which shifted from a smooth structure before irradiation to a 'carpet-like' structure after irradiation.

12. Phonon lineshapes in atom-surface scattering. Energy Technology Data Exchange (ETDEWEB). Martínez-Casado, R (Department of Chemistry, Imperial College London, South Kensington, London SW7 2AZ, United Kingdom); Sanz, A S; Miret-Artés, S (Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 123, E-28006 Madrid, Spain). 2010-08-04. Phonon lineshapes in atom-surface scattering are obtained from a simple stochastic model based on the so-called Caldeira-Leggett Hamiltonian. In this single-bath model, the excited phonon resulting from a creation or annihilation event is coupled to a thermal bath consisting of an infinite number of harmonic oscillators, namely the bath phonons. The diagonalization of the corresponding Hamiltonian leads to a renormalization of the phonon frequencies in terms of the phonon friction or damping coefficient. Moreover, when there are adsorbates on the surface, this single-bath model can be extended to a two-bath model accounting for the effect induced by the adsorbates on the phonon lineshapes, as well as for the adsorbates' own lineshapes.

13. Anomalous conductance oscillations and half-metallicity in atomic Ag-O chains. DEFF Research Database (Denmark). Strange, Mikkel; Thygesen, Kristian Sommer; Sethna, James P. 2008-01-01. Using spin density functional theory, we study the electronic and magnetic properties of atomically thin, suspended chains containing silver and oxygen atoms in an alternating sequence. Chains longer than 4 atoms develop a half-metallic ground state implying fully spin-polarized charge carriers. The conductances of the chains exhibit weak even-odd oscillations around an anomalously low value of 0.1 G0 (G0 = 2e²/h), which coincides with the averaged experimental conductance in the long-chain limit. The unusual conductance properties are explained in terms of a resonating-chain model, which takes the reflection probability and phase shift of a single bulk-chain interface as the only input. The model also explains the conductance oscillations for other metallic chains.

14. A simple analytical model for electronic conductance in a one-dimensional atomic chain across a defect. International Nuclear Information System (INIS). Khater, Antoine; Szczesniak, Dominik. 2011-01-01. An analytical model is presented for the electronic conductance in a one-dimensional atomic chain across an isolated defect. The model system consists of two semi-infinite lead atomic chains, with the defect atom making the junction between the two leads. The calculation is based on a linear combination of atomic orbitals in the tight-binding approximation, with a single atomic one s-like orbital chosen in the present case. The matching method is used to derive analytical expressions for the scattering cross sections for the reflection and transmission processes across the defect, in the Landauer-Büttiker representation. These analytical results verify the known limits for an infinite atomic chain with no defects. The model can be applied numerically to one-dimensional atomic systems supported by appropriate templates. It is also of interest since it would help establish efficient procedures for ensemble averages over a field of impurity configurations in real physical systems.

15. Self-excitation of Rydberg atoms at a metal surface. DEFF Research Database (Denmark). 2017-01-01. The novel effect of self-excitation of an atomic beam propagating above a metal surface is predicted and a theory is developed. Its underlying mechanism is positive feedback provided by the reflective surface for the atomic polarization. Under certain conditions the atomic beam flying in the near field of the metal surface acts as an active device that supports sustained atomic dipole oscillations, which in turn generate an electromagnetic field. This phenomenon does not exploit stimulated emission and therefore does not require population inversion in atoms. An experiment with Rydberg atoms in which this effect should be most pronounced is proposed and the necessary estimates are given.

16. Mechanism of yttrium atom formation in electrothermal atomization from metallic and metal-carbide surfaces of a heated graphite atomizer in atomic absorption spectrometry. International Nuclear Information System (INIS). Wahab, H.S.; Chakrabarti, C.L. 1981-01-01. The mechanism of Y atom formation from pyrocoated graphite, tantalum and tungsten metal surfaces of a graphite tube atomizer has been studied, and a mechanism for the formation of Y atoms is proposed for the first time. (author)

17. Photoionization microscopy of a hydrogen atom near a metal surface. International Nuclear Information System (INIS). Yang Hai-Feng; Wang Lei; Liu Xiao-Jun; Liu Hong-Ping. 2011-01-01. We have studied the ionization of a Rydberg hydrogen atom near a metal surface with a semiclassical analysis of photoionization microscopy. Interference patterns of the electron radial distribution are calculated at different scaled energies above the classical saddle point and at various atom-surface distances. We find that different types of trajectories contribute predominantly to different manifolds in a given interference pattern. As the scaled energy increases, the structure of the interference pattern evolves smoothly and more types of trajectories emerge. As the atom approaches the metal surface, more types of trajectories contribute to the interference pattern as well. When the Rydberg atom comes very close to the metal surface, or the scaled energy approaches the zero-field ionization energy, the potential induced by the metal surface makes the atomic system chaotic. The results also show that atoms near a metal surface exhibit properties similar to those of atoms in parallel electric and magnetic fields. (atomic and molecular physics)
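For the kind of model described in entry 14, the standard textbook closed form for transmission across a single substitutional defect in a one-dimensional tight-binding chain is easy to write down. The sketch below uses the usual Landauer result for a single-site defect (on-site energy eps0, hopping t, dispersion E = -2t·cos k); it illustrates the type of analytic expression such matching-method calculations produce, and is not claimed to be the paper's exact formula.

```python
import math

def transmission(E, t=1.0, eps0=0.5):
    """Transmission probability through a single-site defect of on-site
    energy eps0 in a 1D nearest-neighbour tight-binding chain (hopping t).
    Valid inside the propagating band |E| < 2t."""
    x = E / (2.0 * t)
    if abs(x) >= 1.0:
        return 0.0                     # outside the band: no propagation
    sin_k = math.sqrt(1.0 - x * x)     # from the dispersion E = -2t cos k
    r = eps0 / (2.0 * t * sin_k)       # strength of the defect scattering
    return 1.0 / (1.0 + r * r)

if __name__ == "__main__":
    print(transmission(0.0, t=1.0, eps0=0.0))   # perfect chain -> 1.0
    print(transmission(0.0, t=1.0, eps0=2.0))   # strong defect -> 0.5
```

The eps0 = 0 limit recovers perfect transmission, matching the "known limits for an infinite atomic chain with no defects" check mentioned in the abstract, and T falls off toward the band edges where sin k vanishes.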
18. Surface functionalization of cellulose by grafting oligoether chains. International Nuclear Information System (INIS). Ly, El hadji Babacar; Bras, Julien; Sadocco, Patrizia; Belgacem, Mohamed Naceur; Dufresne, Alain; Thielemans, Wim. 2010-01-01. Two cellulosic substrates (Whatman paper and wood fibres) were chemically modified using different oligoether chains: poly(ethylene) (POE), poly(propylene) (PPG) and poly(tetrahydrofuran) (PTHF) glycols of different lengths were first converted into mono-NCO-terminated macromolecules to allow direct grafting to the cellulose substrates. This step was achieved by reacting the chosen oligoether with 2,4-toluene diisocyanate. The prepared macromolecular grafts were then coupled with the cellulose surface, and the resulting treated substrates were fully characterized by contact angle measurements, elemental analysis, scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). All the techniques implemented showed clear evidence of successful grafting, namely: (i) when using PPG grafts, the polar contribution to the surface energy decreased from approximately 25 to virtually 0 mJ m⁻² and the wettability by water decreased, as the water contact angle shifted from around 40° to above 90°; (ii) nitrogen atoms were detected by elemental analysis and XPS; (iii) the aliphatic carbon content increased from 11 to about 39-50%, depending on the oligoether used; and (iv) small spheres of about 100 nm diameter were detected by SEM. Moreover, the grafted fibres were submitted to biodegradation tests, which showed that they conserved their biodegradable character, although with a slower biodegradation rate. The novelty of the present paper is the direct grafting of the polymeric matrix onto the fibre surface thanks to a new modification strategy involving the use of a diisocyanate as a mediator between the matrix and the reinforcing elements. The covalently linked polymeric chains constituting the matrix could melt under heating, thus yielding the interdiffusion of the macromolecular grafts and forming the composite.

19. Atomic carbon chains as spin-transmitters: An ab initio transport study. DEFF Research Database (Denmark). Fürst, Joachim Alexander; Brandbyge, Mads; Jauho, Antti-Pekka. 2010-01-01. An atomic carbon chain joining two graphene flakes was recently realized in a ground-breaking experiment by Jin et al. (Phys. Rev. Lett., 102 (2009) 205501). We present ab initio results for the electron transport properties of such chains and demonstrate complete spin-polarization of the transmission...

20. The magnetism and spin-dependent electronic transport properties of boron nitride atomic chains. International Nuclear Information System (INIS). An, Yipeng; Zhang, Mengjun; Wang, Tianxing; Jiao, Zhaoyong; Wu, Dapeng; Fu, Zhaoming; Wang, Kun. 2016-01-01. Very recently, boron nitride atomic chains were successfully prepared and observed in experiments [O. Cretu et al., ACS Nano 8, 11950 (2015)]. Herein, using a first-principles technique, we study the magnetism and spin-dependent electronic transport properties of three types of BN atomic chains, whose magnetic moment is 1 μB for BnNn−1, 2 μB for BnNn, and 3 μB for BnNn+1 chains, respectively. The spin-dependent transport results demonstrate that the short BnNn+1 chain presents an obvious spin-filtering effect with a high spin-polarization ratio (>90%) under low bias voltages. This spin-filtering effect does not occur for long BnNn+1 chains under high bias voltages, nor for the other types of BN atomic chains (BnNn−1 and BnNn). The proposed short BnNn+1 chain is predicted to be an effective low-bias spin filter. Moreover, the length-conductance relationships of these BN atomic chains were also studied.

1. The Atomic Views of Flat Supply Chains in China. Directory of Open Access Journals (Sweden). 2010-09-01. Full Text Available. China's domestic supply chain networks are getting flat and unbalanced despite the country's spectacular growth and rise to an enviable position in the global supply chain arena in recent times. The aftermath of the continued investment explosion, especially in the coastal areas of the mainland, calls for an interwoven relationship of Chinese companies with the rest of the global supply chains. However, with new information and communication technologies, the real-time problems arising from these flattened supply chains are much more complex, multifaceted and multidimensional. China needs to re-think and re-focus on better alignment with Western values and cultures while managing its global business activities. This paper discusses four recently developed enterprise models in the light of several case studies conducted recently in Australia, China and India to characterise these new flat supply chains: People-Centric, Molecular Organization, Globally Dispersed and Disaggregated Value Chain. These apparently different but inherently similar models have a vibrant architecture and system behaviour at their core, and they propose an alternative approach to addressing the challenges of unbalanced domestic flat supply chains in China, helping Chinese manufacturers explore ways to embrace Western values and cultures by enlarging their sphere of influence.
2. Anomalous I-V curve for mono-atomic carbon chains. International Nuclear Information System (INIS). Song Bo; Fang Haiping; Sanvito, Stefano. 2010-01-01. The electronic transport properties of mono-atomic carbon chains were studied theoretically using a combination of density functional theory and the non-equilibrium Green's function method. The I-V curves for chains composed of an even number of atoms and attached to gold electrodes through sulfur exhibit two plateaus where the current becomes bias-independent. In contrast, when the number of carbon atoms in the chain is odd, the electric current simply increases monotonically with bias. This peculiar behavior is attributed to dimerization of the chains, resulting directly from their one-dimensional nature. The finding is expected to be helpful in designing molecular devices, such as carbon-chain-based transistors and sensors, for nanoscale and biological applications.

3. Polymer coating comprising 2-methoxyethyl acrylate units synthesized by surface-initiated atom transfer radical polymerization. DEFF Research Database (Denmark). 2011-01-01. Source: US2012184029A. The present invention relates to the preparation of a polymer coating comprising or consisting of polymer chains comprising or consisting of units of 2-methoxyethyl acrylate synthesized by Surface-Initiated Atom Transfer Radical Polymerization (SI ATRP), such as ARGET SI ATRP...

4. Observation of Atom Wave Phase Shifts Induced by Van Der Waals Atom-Surface Interactions. International Nuclear Information System (INIS). Perreault, John D.; Cronin, Alexander D. 2005-01-01. The development of nanotechnology and atom optics relies on understanding how atoms behave and interact with their environment. Isolated atoms can exhibit wavelike (coherent) behavior with a corresponding de Broglie wavelength and phase, which can be affected by nearby surfaces. Here an atom interferometer is used to measure the phase shift of Na atom waves induced by the walls of a 50 nm wide cavity. To our knowledge this is the first direct measurement of the de Broglie wave phase shift caused by atom-surface interactions. The magnitude of the phase shift is in agreement with that predicted by Lifshitz theory for a nonretarded van der Waals interaction. This experiment also demonstrates that atom waves can retain their coherence even when atom-surface distances are as small as 10 nm.

5. The potentials and challenges of electron microscopy in the study of atomic chains. Science.gov (United States). Banhart, Florian; Torre, Alessandro La; Romdhane, Ferdaous Ben; Cretu, Ovidiu. 2017-04-01. This article is a brief review of the potential of transmission electron microscopy (TEM) in the investigation of atom chains, which are the paradigm of a strictly one-dimensional material. After the progress of TEM in the study of new two-dimensional materials, microscopy of free-standing one-dimensional structures is a new challenge with its inherent potentials and difficulties. In-situ experiments in the TEM allowed, for the first time, the generation of isolated atomic chains consisting of metals, carbon or boron nitride. Besides delivering a solid proof of the existence of atomic chains, in-situ TEM studies also enabled measurement of the electrical properties of these fundamental linear structures. While ballistic quantum conductivity is observed in chains of metal atoms, electrical transport in chains of sp1-hybridized carbon is limited by resonant states and reflections at the contacts. Although substantial progress has been made in recent TEM studies of atom chains, fundamental questions remain to be answered, concerning the structural stability of the chains, bonding states at the contacts, and the suitability for applications in nanotechnology. Contribution to the topical issue "The 16th European Microscopy Congress (EMC 2016)", edited by Richard Brydson and Pascale Bayle-Guillemaud.

6. Scattering of Hyperthermal Nitrogen Atoms from the Ag(111) Surface. NARCIS (Netherlands). Ueta, H.; Gleeson, M. A.; Kleyn, A. W. 2009-01-01. Measurements on scattering of hyperthermal N atoms from the Ag(111) surface at temperatures of 500, 600, and 730 K are presented. The scattered atoms have a two-component angular distribution. One of the N components is very broad. In contrast, scattered Ar atoms exhibit only a sharp,

7. Engineering the Dynamics of Effective Spin-Chain Models for Strongly Interacting Atomic Gases. DEFF Research Database (Denmark). Volosniev, A. G.; Petrosyan, D.; Valiente, M. 2015-01-01. We consider a one-dimensional gas of cold atoms with strong contact interactions and construct an effective spin-chain Hamiltonian for a two-component system. The resulting Heisenberg spin model can be engineered by manipulating the shape of the external confining potential of the atomic gas. We...

8. Reevaluation of the role of nuclear uncertainties in experiments on atomic parity violation with isotopic chains. International Nuclear Information System (INIS). Derevianko, Andrei; Porsev, Sergey G. 2002-01-01. In light of new data on neutron distributions from experiments with antiprotonic atoms [Trzcinska et al., Phys. Rev. Lett. 87, 082501 (2001)], we reexamine the role of nuclear-structure uncertainties in the interpretation of measurements of parity violation in atoms using chains of isotopes of the same element. With these new nuclear data, we find an improvement in the sensitivity of isotopic-chain measurements to 'new physics' beyond the standard model. We compare possible constraints on 'new physics' with the most accurate single-isotope probe of parity violation to date, in the Cs atom. We conclude that presently isotopic-chain experiments employing atoms with nuclear charges Z < or approx.
50 may result in more accurate tests of the weak interaction.

9. Immunogold labels: cell-surface markers in atomic force microscopy. NARCIS (Netherlands). Putman, Constant A.J.; de Grooth, B.G.; Hansma, Paul K.; van Hulst, N.F.; Greve, Jan. 1993-01-01. The feasibility of using immunogold labels as cell-surface markers in atomic force microscopy is shown in this paper. The atomic force microscope (AFM) was used to image the surface of immunogold-labeled human lymphocytes. The lymphocytes were isolated from whole blood and labeled by an indirect

10. A time-dependent density functional theory investigation of plasmon resonances of linear Au atomic chains. International Nuclear Information System (INIS). Liu Dan-Dan; Zhang Hong. 2011-01-01. We report theoretical studies of the plasmon resonances in linear Au atomic chains using ab initio time-dependent density functional theory. The dipole responses are investigated as a function of chain length. They converge into a single resonance in the longitudinal mode but split into two transverse modes. As the chain length increases, the longitudinal plasmon mode is redshifted in energy while the transverse modes shift in the opposite direction (blueshift). In addition, the energy gap between the two transverse modes narrows with increasing chain length. We find that these characteristics are unique, differing from those of other metallic chains, and are crucial to atomic-scale engineering of single-molecule sensing, optical spectroscopy, and so on. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

11. Atomic clocks comparison by means of television chain. International Nuclear Information System (INIS). Silva, J.M. 1974-09-01. The various methods and techniques of time and frequency dissemination are presented. One of them, the Line 10 method, was used to compare two atomic clocks located in different places roughly four hundred kilometers apart. The results are compared with parallel results obtained with another method, physical transport, giving the necessary experimental basis for the applicability of the Line 10 method in Brazil.

12. Electronegativity determination of individual surface atoms by atomic force microscopy. Czech Academy of Sciences Publication Activity Database. Onoda, J.; Ondráček, Martin; Jelínek, Pavel; Sugimoto, Y. 2017-01-01. Vol. 8, Apr (2017), 1-6, article no. 15155. ISSN 2041-1723. R&D Projects: GA ČR (CZ) GC14-16963J. Institutional support: RVO:68378271. Keywords: AFM; DFT; electronegativity; surface science. Subject RIV: BM - Solid Matter Physics; Magnetism. OECD field: Condensed matter physics (including formerly solid state physics, supercond.). Impact factor: 12.124, year: 2016.

13. Atomization of Impinging Droplets on Superheated Superhydrophobic Surfaces. Science.gov (United States). Emerson, Preston; Crockett, Julie; Maynes, Daniel. 2017-11-01. Water droplets impinging on smooth superheated surfaces may be characterized by dynamic vapor bubbles rising to the surface, popping, and causing a spray of tiny droplets to erupt from the droplet; this spray is called secondary atomization. Here, atomization is quantified experimentally for water droplets impinging on superheated superhydrophobic surfaces. Smooth hydrophobic and superhydrophobic surfaces with varying rib and post microstructuring were explored. Each surface was placed on an aluminum heating block, and impingement events were captured with a high-speed camera at 3000 fps. For consistency among tests, all events were normalized by the maximum atomization found over a range of temperatures on a smooth hydrophobic surface. An estimate of the level of atomization during an impingement event was created by quantifying the volume of fluid present in the atomization spray. Droplet diameter and Weber number were held constant, and atomization was found for a range of temperatures through the lifetime of the impinging droplet. The Leidenfrost temperature was also determined, defined as the lowest temperature at which atomization ceases to occur. Both the atomization and the Leidenfrost temperature increase with decreasing pitch (distance between microstructures).

14. Multiple atomic scale solid surface interconnects for atom circuits and molecule logic gates. International Nuclear Information System (INIS). Joachim, C; Martrou, D; Gauthier, S; Rezeq, M; Troadec, C; Jie Deng; Chandrasekhar, N. 2010-01-01. The scientific and technical challenges involved in building the planar electrical connection of an atomic-scale circuit to N electrodes (N > 2) are discussed. The practical, laboratory-scale approach explored today to assemble a multi-access atomic-scale precision interconnection machine is presented. Depending on the surface electronic properties of the targeted substrates, two types of machines are considered: on moderate-surface-band-gap materials, scanning tunneling microscopy can be combined with scanning electron microscopy to provide an efficient navigation system, while on wide-surface-band-gap materials, atomic force microscopy can be used in conjunction with optical microscopy. The size of the planar part of the circuit should be minimized on moderate-band-gap surfaces to avoid current leakage, while this requirement does not apply to wide-band-gap surfaces. These constraints impose different methods of connection, which are thoroughly discussed, in particular regarding recent progress in single-atom and molecule manipulation on a surface.
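Entry 13 above holds the Weber number fixed across its temperature sweep. As a reminder, We = ρ·v²·D/σ compares a droplet's inertia to its surface tension. The sketch below evaluates it for water at room temperature; the droplet diameter and impact speed are illustrative values, not taken from the experiments.

```python
def weber_number(rho, v, D, sigma):
    """We = rho * v^2 * D / sigma (dimensionless).
    rho: density (kg/m^3), v: impact speed (m/s),
    D: droplet diameter (m), sigma: surface tension (N/m)."""
    return rho * v ** 2 * D / sigma

if __name__ == "__main__":
    # water at ~20 C; droplet size and speed are illustrative
    We = weber_number(rho=998.0, v=1.5, D=2.5e-3, sigma=0.0728)
    print(round(We, 1))
```

Keeping We constant while varying surface temperature isolates the thermal contribution to secondary atomization from purely inertial spreading and breakup effects.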
Electronic and transport properties of a carbon-atom chain in the core of semiconducting carbon nanotubes International Nuclear Information System (INIS) Chen Jiangwei; Yang Linfeng; Yang Huatong; Dong Jinming 2003-01-01 Using the tight-binding calculations, we have studied electronic and transport properties of the semiconducting single-walled carbon nanotubes (SSWNTs) doped by a chain of carbon-atoms, which can be well controlled by density of the encapsulated carbon atoms. When it is lower, weak coupling between the chain atoms and the tube produces flat bands near the Fermi level, which means a great possibility of superconductivity and ferromagnetism for the combined system. The weak coupling also leads to a significant conductance at the Fermi level, which is contributed by both of the tube and the encapsulated carbon-atom chain. Increasing density of the chain carbon atoms, the flat bands near the Fermi level disappear, and the current may be carried only by the carbon-atom chain, thus making the system become an ideal one-dimensional quantum wire with its conducting chain enclosed by a SWNT insulator 16. Ion beam focusing by the atomic chains of a crystal lattice International Nuclear Information System (INIS) Shulga, V.I. 1975-01-01 A study is made of the focusing of a parallel ion beam by a pair of close packed atomic chains of a crystal. The focal length of this system has been calculated to the approximation of continuous potential of chain in the general form and also for a number of specific potentials of ion-atom interactions. Ar ion beam focusing by a Cu chain pair is discusssed in detail. For this case, the focal length has been calculated as a function of ion energy using the method of computer simulation of ion trajectories in the chain field. The calculations were made on the basis of the Born-Mayer potential with various constants. A pronounced dependence of focal length on the constant in this potential has been found. (author) 17. 
Theoretical realization of cluster-assembled hydrogen storage materials based on terminated carbon atomic chains. Science.gov (United States) Liu, Chun-Sheng; An, Hui; Guo, Ling-Ju; Zeng, Zhi; Ju, Xin 2011-01-14 The capacity of carbon atomic chains with different terminations for hydrogen storage is studied using first-principles density functional theory calculations. Unlike the physisorption of H(2) on the H-terminated chain, we show that two Li (Na) atoms each capping one end of the odd- or even-numbered carbon chain can hold ten H(2) molecules with optimal binding energies for room temperature storage. The hybridization of the Li 2p states with the H(2)σ orbitals contributes to the H(2) adsorption. However, the binding mechanism of the H(2) molecules on Na arises only from the polarization interaction between the charged Na atom and the H(2). Interestingly, additional H(2) molecules can be bound to the carbon atoms at the chain ends due to the charge transfer between Li 2s2p (Na 3s) and C 2p states. More importantly, dimerization of these isolated metal-capped chains does not affect the hydrogen binding energy significantly. In addition, a single chain can be stabilized effectively by the C(60) fullerenes termination. With a hydrogen uptake of ∼10 wt.% on Li-coated C(60)-C(n)-C(60) (n = 5, 8), the Li(12)C(60)-C(n)-Li(12)C(60) complex, keeping the number of adsorbed H(2) molecules per Li and stabilizing the dispersion of individual Li atoms, can serve as better building blocks of polymers than the (Li(12)C(60))(2) dimer. These findings suggest a new route to design cluster-assembled hydrogen storage materials based on terminated sp carbon chains. 18. Properties of fracture surfaces of glassy polymers: chain scission versus chain pullout NARCIS (Netherlands) Fischer, H.R. 
2010-01-01 Fresh fracture surfaces formed by tensile failure of a craze in molded polystyrene (PS) bars have been compared with the molded surfaces of the same bars, using an atomic force microscope with a thermal probe operated in local thermal analysis mode. The results indicate that molecular weight is much 19. Spin-polarized transport properties of Fe atomic chain adsorbed on zigzag graphene nanoribbons International Nuclear Information System (INIS) Zhang, Z L; Chen, Y P; Xie, Y E; Zhang, M; Zhong, J X 2011-01-01 The spin-polarized transport properties of an Fe atomic chain adsorbed on zigzag graphene nanoribbons (ZGNRs) are investigated using density-functional theory in combination with the nonequilibrium Green's function method. We find that the Fe chain has drastic effects on the spin-polarized transport properties of ZGNRs compared with a single Fe atom adsorbed on the ZGNRs. When the Fe chain is adsorbed on the centre of the ZGNR, the original semiconductor transforms into a metal, showing a very wide range of spin-polarized transport. In particular, the spin polarization around the Fermi level is up to 100%. This is because the adsorbed Fe chain not only induces many localized states but also affects the edge states of the ZGNR, which can effectively modulate the spin-polarized transport. The spin polarization of ZGNRs is sensitive to the adsorption site of the Fe chain. When the Fe chain is adsorbed on the edge of the ZGNR, the spin degeneracy of the conductance is completely broken. The spin polarization is found to be more pronounced because the edge state of one edge is destroyed by the additional Fe chain. These results have direct implications for the control of the spin-dependent conductance in ZGNRs with adsorbed Fe chains. 20. Single atom self-diffusion on nickel surfaces International Nuclear Information System (INIS) Tung, R.T.; Graham, W.R.
1980-01-01 Results of a field ion microscope study of single atom self-diffusion on Ni(311), (331), (110), (111) and (100) planes are presented, including detailed information on the self-diffusion parameters on (311), (331), and (110) surfaces, and activation energies for diffusion on the (111), and (100) surfaces. Evidence is presented for the existence of two types of adsorption site and surface site geometry for single nickel atoms on the (111) surface. The presence of adsorbed hydrogen on the (110), (311), and (331) surfaces is shown to lower the onset temperature for self-diffusion on these planes. (orig.) 1. Atomic forces between noble gas atoms, alkali ions, and halogen ions for surface interactions Science.gov (United States) Wilson, J. W.; Outlaw, R. A.; Heinbockel, J. H. 1988-01-01 The components of the physical forces between noble gas atoms, alkali ions, and halogen ions are analyzed and a data base developed from analysis of the two-body potential data, the alkali-halide molecular data, and the noble gas crystal and salt crystal data. A satisfactory global fit to this molecular and crystal data is then reproduced by the model to within several percent. Surface potentials are evaluated for noble gas atoms on noble gas surfaces and salt crystal surfaces with surface tension neglected. Within this context, the noble gas surface potentials on noble gas and salt crystals are considered to be accurate to within several percent. 2. Direct observation of atoms on surfaces by scanning tunnelling microscopy International Nuclear Information System (INIS) Baldeschwieler, J.D. 1989-01-01 The scanning tunnelling microscope is a non-destructive means of achieving atomic level resolution of crystal surfaces in real space to elucidate surface structures, electronic properties and chemical composition. Scanning tunnelling microscope is a powerful, real space surface structure probe complementary to other techniques such as x-ray diffraction. 21 refs., 8 figs 3. 
Atomic structure of the SnO2 (110) surface International Nuclear Information System (INIS) Godin, T.J.; LaFemina, J.P. 1991-12-01 Using a tight-binding, total-energy model, we examine atomic relaxations of the ideal stoichiometric and reduced tin oxide (110) surfaces. In both cases we find a nearly bond-length-conserving rumple of the top layer and a smaller counter-relaxation of the second layer. These calculations show no evidence of surface states in the band gap for either surface 4. Generation of multipartite entangled states for chains of atoms in the framework of cavity-QED Energy Technology Data Exchange (ETDEWEB) Gonta, Denis 2010-07-07 Cavity quantum electrodynamics is a research field that studies electromagnetic fields in confined spaces and the radiative properties of atoms in such fields. Experimentally, the simplest example of such a system is a single atom interacting with the modes of a high-finesse resonator. Theoretically, such a system provides an excellent framework for quantum information processing, in which atoms and light are interpreted as bits of quantum information and their mutual interaction provides a controllable entanglement mechanism. In this thesis, we present several practical schemes for the generation of multipartite entangled states for chains of atoms which pass through one or more high-finesse resonators. In the first step, we propose two schemes for the generation of one- and two-dimensional cluster states of arbitrary size. These schemes are based on the resonant interaction of a chain of Rydberg atoms with one or more microwave cavities. In the second step, we propose a scheme for the generation of multipartite W states. This scheme is based on the off-resonant interaction of a chain of three-level atoms with an optical cavity and a laser beam.
We describe in detail all the individual steps required to realize the proposed schemes and, moreover, we discuss several techniques to reveal the non-classical correlations associated with the generated small-sized entangled states. (orig.) 5. Generation of multipartite entangled states for chains of atoms in the framework of cavity-QED International Nuclear Information System (INIS) Gonta, Denis 2010-01-01 Cavity quantum electrodynamics is a research field that studies electromagnetic fields in confined spaces and the radiative properties of atoms in such fields. Experimentally, the simplest example of such a system is a single atom interacting with the modes of a high-finesse resonator. Theoretically, such a system provides an excellent framework for quantum information processing, in which atoms and light are interpreted as bits of quantum information and their mutual interaction provides a controllable entanglement mechanism. In this thesis, we present several practical schemes for the generation of multipartite entangled states for chains of atoms which pass through one or more high-finesse resonators. In the first step, we propose two schemes for the generation of one- and two-dimensional cluster states of arbitrary size. These schemes are based on the resonant interaction of a chain of Rydberg atoms with one or more microwave cavities. In the second step, we propose a scheme for the generation of multipartite W states. This scheme is based on the off-resonant interaction of a chain of three-level atoms with an optical cavity and a laser beam. We describe in detail all the individual steps required to realize the proposed schemes and, moreover, we discuss several techniques to reveal the non-classical correlations associated with the generated small-sized entangled states. (orig.) 6. Non-thermalization in trapped atomic ion spin chains Science.gov (United States) Hess, P. W.; Becker, P.; Kaplan, H. B.; Kyprianidis, A.; Lee, A.
C.; Neyenhuis, B.; Pagano, G.; Richerme, P.; Senko, C.; Smith, J.; Tan, W. L.; Zhang, J.; Monroe, C. 2017-10-01 Linear arrays of trapped and laser-cooled atomic ions are a versatile platform for studying strongly interacting many-body quantum systems. Effective spins are encoded in long-lived electronic levels of each ion and made to interact through laser-mediated optical dipole forces. The advantages of experiments with cold trapped ions, including high spatio-temporal resolution, decoupling from the external environment and control over the system Hamiltonian, are used to measure quantum effects not always accessible in natural condensed matter samples. In this review, we highlight recent work using trapped ions to explore a variety of non-ergodic phenomena in long-range interacting spin models, effects that are heralded by the memory of out-of-equilibrium initial conditions. We observe long-lived memory in static magnetizations for quenched many-body localization and prethermalization, while memory is preserved in the periodic oscillations of a driven discrete time crystal state. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'. 7. Single atom and molecule chemisorption on solid surfaces International Nuclear Information System (INIS) Anda, E.V.; Ure, J.E.; Majlis, N. 1981-01-01 A simplified model for the microscopic interpretation of single atom and molecule chemisorption on metallic surfaces is presented. An appropriate Hamiltonian for this problem is solved through the Green's function formalism. (L.C.) [pt 8. Modeling noncontact atomic force microscopy resolution on corrugated surfaces Directory of Open Access Journals (Sweden) Kristen M. Burson 2012-03-01 Full Text Available Key developments in NC-AFM have generally involved atomically flat crystalline surfaces. However, many surfaces of technological interest are not atomically flat.
We discuss the experimental difficulties in obtaining high-resolution images of rough surfaces, with amorphous SiO2 as a specific case. We develop a quasi-1-D minimal model for noncontact atomic force microscopy, based on van der Waals interactions between a spherical tip and the surface, explicitly accounting for the corrugated substrate (modeled as a sinusoid). The model results show an attenuation of the topographic contours by ~30% for tip distances within 5 Å of the surface. Results also indicate a deviation from the Hamaker force law for a sphere interacting with a flat surface. 9. SASP - Symposium on atomic, cluster and surface physics '94 Energy Technology Data Exchange (ETDEWEB) Maerk, T D; Schrittwieser, R; Smith, D 1994-12-31 This international symposium (Founding Chairman: W. Lindinger, Innsbruck) is one in a continuing biennial series of conferences which seeks to promote the growth of scientific knowledge and its effective exchange among scientists in the field of atomic, molecular, cluster and surface physics and related areas. The symposium deals in particular with interactions between ions, electrons, photons, atoms, molecules, and clusters, and with their interactions with surfaces. (author). 10. Removal of foreign atoms from a metal surface bombarded with fast atomic particles Energy Technology Data Exchange (ETDEWEB) Dolotov, S.K.; Evstigneev, S.A.; Luk'yanov, S.Yu.; Martynenko, Yu.V.; Chicherov, V.M. 1976-07-01 A metal surface coated with foreign atoms was irradiated with periodically repeating ion current pulses. The energy of the ions bombarding the target was 20 to 30 keV, and inert gas ions were used. A study of the time dependences of the current of the dislodged foreign atoms showed that the rate of their removal from the target surface is determined by the sputtering coefficient of the substrate metal. 11.
Removal of foreign atoms from a metal surface bombarded with fast atomic particles International Nuclear Information System (INIS) Dolotov, S.K.; Evstigneev, S.A.; Luk'yanov, S.Yu.; Martynenko, Yu.V.; Chicherov, V.M. A metal surface coated with foreign atoms was irradiated with periodically repeating ion current pulses. The energy of the ions bombarding the target was 20 to 30 keV, and inert gas ions were used. A study of the time dependences of the current of the dislodged foreign atoms showed that the rate of their removal from the target surface is determined by the sputtering coefficient of the substrate metal 12. Atomic imaging of an InSe single-crystal surface with atomic force microscope OpenAIRE Uosaki, Kohei; Koinuma, Michio 1993-01-01 The atomic force microscope was employed to observe in air the surface atomic structure of InSe, one of the III-VI compound semiconductors with layered structures. Atomic arrangements were observed in both n-type and p-type materials. The observed structures are in good agreement with those expected from bulk crystal structures. The atomic images became less clear upon repeated imaging. Wide-area imaging after the imaging of a small area clearly showed that a mound was created at the sp... 13. Biomimetic surface modification of polypropylene by surface chain transfer reaction based on mussel-inspired adhesion technology and thiol chemistry Energy Technology Data Exchange (ETDEWEB) Niu, Zhijun; Zhao, Yang; Sun, Wei; Shi, Suqing, E-mail: [email protected]; Gong, Yongkuan 2016-11-15 Highlights: • Biomimetic surface modification of PP was successfully conducted by integrating mussel-inspired technology, thiol chemistry and cell outer membrane-like structures. • The resultant biomimetic surface exhibits good interface and surface stability. • The obvious suppression of protein adsorption and platelet adhesion is also achieved. • The residual thiol groups on the surface could be further functionalized.
- Abstract: Biomimetic surface modification of polypropylene (PP) is conducted by a surface chain transfer reaction based on the mussel-inspired versatile adhesion technology and thiol chemistry, using 2-methacryloyloxyethylphosphorylcholine (MPC) as a hydrophilic monomer mimicking the cell outer membrane structure and 2,2-azobisisobutyronitrile (AIBN) as initiator in ethanol. A layer of polydopamine (PDA) is first deposited onto the PP surface, which not only offers good interfacial adhesion with PP, but also supplies secondary reaction sites (-NH{sub 2}) to covalently anchor thiol groups onto the PP surface. Then the radical chain transfer to surface-bonded thiol groups and surface re-initiated polymerization of MPC lead to the formation of a thin layer of polymer brush (PMPC) with a cell outer membrane-mimetic structure on the PP surface. X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM) and water contact angle measurements are used to characterize the PP surfaces before and after modification. Protein adsorption and platelet adhesion experiments are also employed to evaluate the interactions of the PP surface with biomolecules. The results show that PMPC is successfully grafted onto the PP surface. In comparison with bare PP, the resultant PP-PMPC surface exhibits greatly improved protein and platelet resistance, which is attributable to both the increased surface hydrophilicity and the zwitterionic structure. More importantly, the residual thiol groups on the PP-PMPC surface create a new pathway to further functionalize such 14.
Biomimetic surface modification of polypropylene by surface chain transfer reaction based on mussel-inspired adhesion technology and thiol chemistry International Nuclear Information System (INIS) Niu, Zhijun; Zhao, Yang; Sun, Wei; Shi, Suqing; Gong, Yongkuan 2016-01-01 Highlights: • Biomimetic surface modification of PP was successfully conducted by integrating mussel-inspired technology, thiol chemistry and cell outer membrane-like structures. • The resultant biomimetic surface exhibits good interface and surface stability. • The obvious suppression of protein adsorption and platelet adhesion is also achieved. • The residual thiol groups on the surface could be further functionalized. - Abstract: Biomimetic surface modification of polypropylene (PP) is conducted by a surface chain transfer reaction based on the mussel-inspired versatile adhesion technology and thiol chemistry, using 2-methacryloyloxyethylphosphorylcholine (MPC) as a hydrophilic monomer mimicking the cell outer membrane structure and 2,2-azobisisobutyronitrile (AIBN) as initiator in ethanol. A layer of polydopamine (PDA) is first deposited onto the PP surface, which not only offers good interfacial adhesion with PP, but also supplies secondary reaction sites (-NH 2 ) to covalently anchor thiol groups onto the PP surface. Then the radical chain transfer to surface-bonded thiol groups and surface re-initiated polymerization of MPC lead to the formation of a thin layer of polymer brush (PMPC) with a cell outer membrane-mimetic structure on the PP surface. X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM) and water contact angle measurements are used to characterize the PP surfaces before and after modification. Protein adsorption and platelet adhesion experiments are also employed to evaluate the interactions of the PP surface with biomolecules. The results show that PMPC is successfully grafted onto the PP surface.
In comparison with bare PP, the resultant PP-PMPC surface exhibits greatly improved protein and platelet resistance, which is attributable to both the increased surface hydrophilicity and the zwitterionic structure. More importantly, the residual thiol groups on the PP-PMPC surface create a new pathway to further functionalize such 15. Quantum reflection of fast atoms from insulator surfaces: Eikonal description Energy Technology Data Exchange (ETDEWEB) Gravielle, M S; Miraglia, J E, E-mail: [email protected], E-mail: [email protected] [Instituto de Astronomia y Fisica del Espacio, CONICET, Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina) and Dpto. de Fisica, FCEN, Universidad de Buenos Aires (Argentina) 2009-11-01 Interference effects recently observed in grazing scattering of swift atoms from insulator surfaces are studied within a distorted-wave method, the surface eikonal approximation. This approach makes use of the eikonal wave function, involving axially channeled trajectories. The theory is applied to helium atoms colliding with a LiF(001) surface along low-index crystallographic directions. The roles played by the projectile polarization and the surface rumpling are investigated; both effects are found to be important for the description of the experimental projectile distributions. 16. Catalytic behavior of ‘Pt-atomic chain encapsulated gold nanotube’: A density functional study Energy Technology Data Exchange (ETDEWEB) Nigam, Sandeep, E-mail: [email protected]; Majumder, Chiranjib [Chemistry Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India) 2016-05-23 With the aim of designing a novel material and exploring its catalytic performance towards CO oxidation, a Pt atomic chain was introduced inside a gold nanotube (Au-NT). Theoretical calculations at the first-principles level were carried out to investigate the atomic and electronic properties of the composite.
Geometrically, the Pt atoms prefer to align in a zigzag fashion. Significant electronic charge transfer from the inner Pt atoms to the outer-wall Au atoms is observed. Interaction of O{sub 2} with the Au-NT wall is followed by injection of additional electronic charge into the anti-bonding orbital of the oxygen molecule, leading to activation of the O-O bond. Further interaction of a CO molecule with the activated oxygen molecule leads to a spontaneous oxidation reaction and the formation of CO{sub 2}. 17. Atomic force and optical near-field microscopic investigations of polarization holographic gratings in a liquid crystalline azobenzene side-chain polyester DEFF Research Database (Denmark) Ramanujam, P.S.; Holme, N.C.R.; Hvilsted, S. 1996-01-01 Atomic force and scanning near-field optical microscopic investigations have been carried out on a polarization holographic grating recorded in an azobenzene side-chain liquid crystalline polyester. It has been found that immediately following laser irradiation, a topographic surface grating... 18. Photodesorption of Na atoms from rough Na surfaces DEFF Research Database (Denmark) Balzer, Frank; Gerlach, R.; Manson, J.R. 1997-01-01 We investigate the desorption of Na atoms from large Na clusters deposited on dielectric surfaces. High-resolution translational energy distributions of the desorbing atoms are determined by three independent methods: two-photon laser-induced fluorescence, as well as single-photon and resonance-enhanced two-photon ionization techniques. Upon variation of surface temperature and for different substrates (mica vs lithium fluoride), clear non-Maxwellian time-of-flight distributions are observed, with a cos θ angular dependence and most probable kinetic energies below those expected of atoms desorbing from... atoms are scattered by surface vibrations. Recent experiments providing time constants for the decay of the optical excitations in the clusters support this model.
The excellent agreement between experiment and theory indicates the importance of both absorption of the laser photons via direct excitation... 19. High performance current and spin diode of atomic carbon chain between transversely symmetric ribbon electrodes. Science.gov (United States) Dong, Yao-Jun; Wang, Xue-Feng; Yang, Shuo-Wang; Wu, Xue-Mei 2014-08-21 We demonstrate that giant current and high spin rectification ratios can be achieved in atomic carbon chain devices connected between two symmetric ferromagnetic zigzag-graphene-nanoribbon electrodes. The spin dependent transport simulation is carried out by density functional theory combined with the non-equilibrium Green's function method. It is found that the transverse symmetries of the electronic wave functions in the nanoribbons and the carbon chain are critical to the spin transport modes. In the parallel magnetization configuration of two electrodes, pure spin current is observed in both linear and nonlinear regions. However, in the antiparallel configuration, the spin-up (down) current is prohibited under the positive (negative) voltage bias, which results in a spin rectification ratio of order 10(4). When edge carbon atoms are substituted with boron atoms to suppress the edge magnetization in one of the electrodes, we obtain a diode with current rectification ratio over 10(6). 20. Surface Adsorption in Nonpolarizable Atomic Models. Science.gov (United States) Whitmer, Jonathan K; Joshi, Abhijeet A; Carlton, Rebecca J; Abbott, Nicholas L; de Pablo, Juan J 2014-12-09 Many ionic solutions exhibit species-dependent properties, including surface tension and the salting-out of proteins. These effects may be loosely quantified in terms of the Hofmeister series, first identified in the context of protein solubility. Here, our interest is to develop atomistic models capable of capturing Hofmeister effects rigorously. 
Importantly, we aim to capture this dependence in computationally cheap "hard" ionic models, which do not exhibit dynamic polarization. To do this, we have performed an investigation detailing the effects of the water model on these properties. Though incredibly important, the role of water models in simulations of ionic solutions and biological systems is essentially unexplored. We quantify this via the ion-dependent surface attraction of the halide series (Cl, Br, I) and, in so doing, determine the relative importance of various hypothesized contributions to ionic surface free energies. Importantly, we demonstrate that surface adsorption can be obtained in hard ionic models combined with a thermodynamically accurate representation of the water molecule (TIP4Q). The effect observed in simulations of iodide is commensurate with previous calculations of the surface potential of mean force in rigid molecular dynamics and polarizable density-functional models. Our calculations are direct simulation evidence of the subtle but sensitive role of water thermodynamics in atomistic simulations. 1. Bolalipid fiber aggregation can be modulated by the introduction of sulfur atoms into the spacer chains. Science.gov (United States) Graf, Gesche; Drescher, Simon; Meister, Annette; Haramus, Vasyl M; Dobner, Bodo; Blume, Alfred 2013-03-01 2. Measuring Forces between Oxide Surfaces Using the Atomic Force Microscope DEFF Research Database (Denmark) Pedersen, Henrik Guldberg; Høj, Jakob Weiland 1996-01-01 The interactions between colloidal particles play a major role in the processing of ceramics, especially in casting processes. With the Atomic Force Microscope (AFM) it is possible to measure the interaction force between a small oxide particle (a few microns) and a surface as a function of surface... 3. Effects on energetic impact of atomic clusters with surfaces International Nuclear Information System (INIS) Popok, V.N.; Vuchkovich, S.; Abdela, A.; Campbell, E.E.B.
2007-01-01 A brief state-of-the-art review of the field of cluster ion interactions with surfaces is presented. Cluster beams are efficient tools for manipulating agglomerates of atoms, providing control over synthesis as well as modification of surfaces on the nm scale. The application of cluster beams for technological purposes requires knowledge of the physics of cluster-surface impact, which differs significantly from monomer ion-surface interactions. The main effects of cluster-surface collisions are discussed. Recent results obtained in experiments on silicon surface nanostructuring using keV-energy implantation of inert gas cluster ions are presented and compared with molecular dynamics simulations. (authors) 4. Electronic transport in large systems through a QUAMBO-NEGF approach: Application to atomic carbon chains International Nuclear Information System (INIS) Fang, X.W.; Zhang, G.P.; Yao, Y.X.; Wang, C.Z.; Ding, Z.J.; Ho, K.M. 2011-01-01 The conductance of a single-atom carbon chain (SACC) between two zigzag graphene nanoribbons (GNRs) is studied by an efficient scheme utilizing tight-binding (TB) parameters generated via quasi-atomic minimal basis set orbitals (QUAMBOs) and the non-equilibrium Green's function (NEGF) method. Large systems (SACCs containing more than 50 atoms) are investigated, and the electronic transport properties are found to correlate with the SACC's parity. The SACCs provide a stable off or on state in a broad energy region (0.1-1 eV) around the Fermi energy. The off state is not sensitive to the length of the SACC, while the corresponding energy region decreases as the width of the GNR increases. -- Highlights: → Graphene has many superior electronic properties. → First-principles calculations are accurate but limited in system size. → QUAMBOs construct tight-binding parameters with spatial localization, which are then used with a divide-and-conquer method.
→ SACC (single carbon atom chain): structure and transport show even-odd parity, and long chains are studied. 5. Measurement of near neighbor separations of surface atoms International Nuclear Information System (INIS) Cohen, P.I. Two techniques are being developed to measure the nearest-neighbor distances of atoms at the surfaces of solids. Both measure extended fine structure in the excitation probability of core-level electrons excited by an incident electron beam. This is an important problem because the structures of most surface systems are as yet unknown, even though the location of surface atoms is the basis for any quantitative understanding of the chemistry and physics of surfaces and interfaces. These methods would allow any laboratory to make in situ determinations of surface structure in conjunction with most other laboratory probes of surfaces. Each of the two techniques has different advantages; further, their combination will increase confidence in the results by reducing systematic error in the data analysis 6. Interactions of germanium atoms with silica surfaces International Nuclear Information System (INIS) Stanley, Scott K.; Coffee, Shawn S.; Ekerdt, John G. 2005-01-01 GeH4 is thermally cracked over a hot filament, depositing 0.7-15 ML Ge onto 2-7 nm SiO2/Si(100) at substrate temperatures of 300-970 K. Ge bonding changes are analyzed during annealing with X-ray photoelectron spectroscopy. Ge, GeHx, GeO, and GeO2 desorption is monitored through temperature programmed desorption in the temperature range 300-1000 K. Low-temperature desorption features are attributed to GeO and GeH4. No GeO2 desorption is observed, but GeO2 decomposition to Ge through high-temperature pathways is seen above 750 K. Germanium oxidation results from Ge etching of the oxide substrate. With these results, explanations for the failure of conventional chemical vapor deposition to produce Ge nanocrystals on SiO2 surfaces are proposed 7.
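The tight-binding/NEGF machinery cited in the QUAMBO-NEGF record above can be illustrated with a minimal sketch. This is not the QUAMBO method of that paper: it computes the Landauer transmission of a hypothetical one-dimensional tight-binding chain between two semi-infinite 1D leads, for which the lead surface Green's function is known in closed form; all parameter values below are illustrative.

```python
import numpy as np

def lead_surface_gf(E, t, eta=1e-6):
    """Retarded surface Green's function of a semi-infinite 1D tight-binding
    lead (on-site energy 0, nearest-neighbour hopping t), in closed form."""
    z = E + 1j * eta
    return (z - 1j * np.sqrt(4 * t**2 - z**2 + 0j)) / (2 * t**2)

def transmission(E, H_dev, t_lead, t_c):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a
    device Hamiltonian H_dev coupled to two 1D leads via hopping t_c."""
    n = H_dev.shape[0]
    g = lead_surface_gf(E, t_lead)
    sigma_L = np.zeros((n, n), complex)
    sigma_R = np.zeros((n, n), complex)
    sigma_L[0, 0] = t_c**2 * g      # left lead touches site 0
    sigma_R[-1, -1] = t_c**2 * g    # right lead touches the last site
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    G = np.linalg.inv(E * np.eye(n) - H_dev - sigma_L - sigma_R)
    return float(np.real(np.trace(gamma_L @ G @ gamma_R @ G.conj().T)))

# Hypothetical 7-atom chain with the same hopping as the leads: a perfect
# wire, so T(E) is ~1 inside the band |E| < 2|t| and ~0 outside it.
t = 1.0
H = t * (np.eye(7, k=1) + np.eye(7, k=-1))
print(transmission(0.5, H, t, t))   # inside the band
print(transmission(3.0, H, t, t))   # outside the band
```

A perfect chain transmits unit flux inside the band; the on/off behaviour and even-odd parity reported for SACCs only emerge once the device Hamiltonian differs from the leads (different hoppings, on-site energies, or GNR contacts).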
Non-adiabatic quantum state preparation and quantum state transport in chains of Rydberg atoms Science.gov (United States) Ostmann, Maike; Minář, Jiří; Marcuzzi, Matteo; Levi, Emanuele; Lesanovsky, Igor 2017-12-01 Motivated by recent progress in the experimental manipulation of cold atoms in optical lattices, we study three different protocols for non-adiabatic quantum state preparation and state transport in chains of Rydberg atoms. The protocols we discuss are based on the blockade mechanism between atoms which, when excited to a Rydberg state, interact through a van der Waals potential, and they rely on single-site addressing. Specifically, we discuss protocols for the efficient creation of an antiferromagnetic GHZ state, of a class of matrix product states including a so-called Rydberg crystal, and for the transport of a single-qubit quantum state between the two ends of a chain of atoms. We identify system parameters allowing the protocols to operate on timescales shorter than the lifetime of the Rydberg states while yielding high-fidelity output states. We discuss the effect of positional disorder on the resulting states and comment on limitations due to other sources of noise, such as radiative decay of the Rydberg states. The proposed protocols provide a testbed for benchmarking the performance of quantum information processing platforms based on Rydberg atoms. 8. Entanglement generation between two atoms via surface modes International Nuclear Information System (INIS) Xu Jingping; Yang Yaping; Al-Amri, M.; Zhu Shiyao; Zubairy, M. Suhail 2011-01-01 We discuss the coupling of two identical atoms, separated by a metal or metamaterial slab, through surface modes. We show that coupling through the surface modes can induce entanglement. We discuss how to control this coupling for the metal or metamaterial slab by adjusting the symmetric and antisymmetric properties of the surface modes.
We analyze the dispersion relation of the surface modes and study the parameter ranges that support surface modes with the same properties. Our results have potential applications in quantum communication and quantum computation. 9. Collapse of Langmuir monolayer at lower surface pressure: Effect of hydrophobic chain length Energy Technology Data Exchange (ETDEWEB) Das, Kaushik, E-mail: [email protected]; Kundu, Sarathi [Physical Sciences Division, Institute of Advanced Study in Science and Technology, Vigyan Path, Paschim Boragaon, Garchuk, Guwahati, Assam 781035 (India) 2016-05-23 Long-chain fatty acid molecules (e.g., stearic and behenic acids) form a monolayer on the water surface in the presence of Ba{sup 2+} ions at low subphase pH (≈ 5.5) and remain a monolayer until collapse, which generally occurs at higher surface pressure (π{sub c} > 50 mN/m). Monolayer formation is verified from the surface pressure vs. area per molecule (π-A) isotherms and also from atomic force microscopy (AFM) analysis of the films deposited by a single upstroke of a hydrophilic Si(001) substrate through the monolayer-covered water surface. At high subphase pH (≈ 9.5), barium stearate molecules form a multilayer structure at lower surface pressure, which is verified from the π-A isotherms and AFM analysis of the film deposited at 25 mN/m. Such monolayer-to-multilayer structure formation, or monolayer collapse at lower surface pressure, is unusual, as fatty acid salt molecules generally form a monolayer on the water surface at this pressure. The formation of bidentate chelate coordination in the metal-containing headgroups is the reason for this monolayer-to-multilayer transition. However, for the longer-chain barium behenate molecules only the monolayer structure is maintained at that high subphase pH (≈ 9.5), owing to the relatively stronger tail-tail hydrophobic interaction. 10.
Classical theory of atom-surface scattering: The rainbow effect Science.gov (United States) 2012-07-01 The scattering of heavy atoms and molecules from surfaces is oftentimes dominated by classical mechanics. A large body of experiments has gathered data on the angular distributions of the scattered species, their energy loss distribution, sticking probability, dependence on surface temperature and more. For many years these phenomena have been considered theoretically in the framework of the “washboard model” in which the interaction of the incident particle with the surface is described in terms of hard wall potentials. Although this class of models has helped in elucidating some of the features, it left open many questions: true potentials are clearly not hard wall potentials; it does not provide a realistic framework for phonon scattering; and it cannot explain the incident-angle and incident-energy dependence of rainbow scattering, nor can it provide a consistent theory for sticking. In recent years we have been developing a classical perturbation theory approach which has provided new insight into the dynamics of atom-surface scattering. The theory includes both surface corrugation and interaction with surface phonons, in terms of harmonic baths which are linearly coupled to the system coordinates. This model has been successful in elucidating many new features of rainbow scattering in terms of frictions and bath fluctuations or noise. It has also given new insight into the origins of asymmetry in atomic scattering from surfaces. New phenomena deduced from the theory include friction-induced rainbows, energy-loss rainbows, a theory of super-rainbows, and more. In this review we present the classical theory of atom-surface scattering as well as extensions and implications for semiclassical scattering and the further development of a quantum theory of surface scattering.
Special emphasis is given to the inversion of scattering data into information on the particle-surface interactions. 11. Theory of inelastic effects in resonant atom-surface scattering International Nuclear Information System (INIS) Evans, D.K. 1983-01-01 The progress of theoretical and experimental developments in atom-surface scattering is briefly reviewed. The formal theory of atom-surface resonant scattering is reviewed and expanded, with both S and T matrix approaches being explained. The two-potential formalism is shown to be useful for dealing with the problem in question. A detailed theory based on the S-matrix and the two-potential formalism is presented. This theory takes account of interactions between the incident atoms and the surface phonons, with resonant effects being displayed explicitly. The Debye-Waller attenuation is also studied. The case in which the atom-surface potential is divided into an attractive part V/sub a/ and a repulsive part V/sub r/ is considered at length. Several techniques are presented for handling the scattering due to V/sub r/, for the case in which V/sub r/ is taken to be the hard corrugated surface potential. The theory is used to calculate the scattered intensities for the system 4 He/LiF(001). A detailed comparison with experiment is made, with polar scans, azimuthal scans, and time-of-flight measurements being considered. The theory is seen to explain the location and signature of resonant features, and to provide reasonable overall agreement with the experimental results 12. Evaporation and Hydrocarbon Chain Conformation of Surface Lipid Films Science.gov (United States) Sledge, Samiyyah M.; Khimji, Hussain; Borchman, Douglas; Oliver, Alexandria; Michael, Heidi; Dennis, Emily K.; Gerlach, Dylan; Bhola, Rahul; Stephen, Elsa 2016-01-01 Purpose The inhibition of the rate of evaporation (Revap) by surface lipids is relevant to reservoirs and dry eye. Our aim was to test the idea that lipid surface films inhibit Revap. 
Methods Revap were determined gravimetrically. Hydrocarbon chain conformation and structure were measured using a Raman microscope. Six 1-hydroxyl hydrocarbons (11–24 carbons in length) and human meibum were studied. Reflex tears were obtained from a 62-year-old male. Results The Raman scattering intensity of the lipid film deviated by about 7 % for hydroxyl lipids and varied by 21 % for meibum films across the entire film at a resolution of 5 µm2. All of the surface lipids were ordered. Revap of the shorter chain hydroxyl lipids were slightly (7%) but significantly lower compared with the longer chain hydroxyl lipids. Revap of both groups was essentially similar to that of buffer. A hydroxyl lipid film did not influence Revap over an estimated average thickness range of 0.69 to >6.9 µm. Revap of human tears and buffer with and without human meibum (34.4 µm thick) was not significantly different. Revap of human tears was not significantly different from buffer. Conclusions Human meibum and hydroxyl lipids, regardless of their fluidity, chain length, or thickness did not inhibit Revap of buffer or tears even though they completely covered the surface. It is unlikely that hydroxyl lipids can be used to inhibit Revap of reservoirs. Our data do not support the widely accepted (yet unconfirmed) idea that the tear film lipid layer inhibits Revap of tears. PMID:27395776 13. Numerical analysis of the performance of an atomic iodine laser amplifier chain International Nuclear Information System (INIS) Uchiyama, T.; Witte, K.J. 1981-05-01 The performance of an atomic iodine laser amplifier chain with output pulse powers close to 2 TW is analyzed by a numerical solution of the Maxwell-Bloch equations. Two subjects are discussed in detail. 
The first one refers to the pulse compression occurring in the chain as a result of saturation and some related aspects such as damage to components, self-focussing, correlation between the input and output pulse shapes, and the means of pulse shape control. The second deals with various schemes suited for achieving extraction efficiencies of about 55% or larger. These include the single-pass and double-pass schemes, pulses with two carrier frequencies and a variation of the pulse carrier frequency. In addition, the response of the chain to a variation of those parameters which are most easily subject to change in a routine operation is investigated. (orig.) 14. Surface structure investigations using noncontact atomic force microscopy International Nuclear Information System (INIS) Kolodziej, J.J.; Such, B.; Goryl, M.; Krok, F.; Piatkowski, P.; Szymonski, M. 2006-01-01 Surfaces of several III-V compound semiconductors (InSb, GaAs, InP, InAs) of the (0 0 1) orientation have been studied with noncontact atomic force microscopy (NC-AFM). The atomically resolved patterns obtained have been compared with structural models available in the literature. It is shown that NC-AFM is an efficient tool for imaging complex surface structures in real space. It is also demonstrated that the recent structural models of III-V compound surfaces provide a sound basis for interpretation of the majority of features present in recorded patterns. However, there are also many new findings revealed by the NC-AFM method, which is still a new experimental technique in the context of surface structure determination
With increasing Z, the energy decreases according to 1/Z³, while with decreasing Z the energy saturates to a finite value. It is also shown that the energy is affected by the velocity of the atom, but this correction is small. Our result for large Z is consistent with the work of Manson and Ritchie [Phys. Rev. B 29, 1084 (1984)], who follow a more traditional approach to the problem. 16. Chain reaction. History of the atomic bomb; Kettenreaktion. Die Geschichte der Atombombe Energy Technology Data Exchange (ETDEWEB) Mania, Hubert 2010-07-01 In 1896 Henri Becquerel tracked down a strange radiation, which Marie Curie later named radioactivity. In the following decades the German scientists Max Planck, Albert Einstein and Werner Heisenberg made fundamental contributions to the understanding of processes in the atomic nucleus. At Goettingen, a center of the international nuclear physics community, the American student J. Robert Oppenheimer joined this research. At the beginning of 1939 the news of Otto Hahn's discovery of nuclear fission electrified researchers. The first step towards unleashing atomic energy had been taken. Half a year later the Second World War began. Suddenly, physicists who had been friends in busy communication with one another were divided into hostile power blocs as bearers of official secrets. In this exciting book the author tells the story of the first atomic bomb as a chain reaction of ideas, discoveries and visions, of friendships, jealousy and intrigues of scientists, adventurers and geniuses. (orig./GL) 17. Bloch Oscillations in the Chains of Artificial Atoms Dressed with Photons Directory of Open Access Journals (Sweden) Ilay Levie 2018-06-01 Full Text Available We present a model of a one-dimensional chain of two-level artificial atoms driven simultaneously with a DC field and quantum light in the strong coupling regime. The interaction of atoms with light leads to electron-photon entanglement (dressing of the atoms with light).
The driving via the dc field leads to Bloch oscillations (BO) in the chain of dressed atoms. We consider the mutual influence of dressing and BO and show that the scenario of oscillations differs dramatically from that predicted by the Jaynes-Cummings and Bloch-Zener models. We study the evolution of the population inversion, tunneling current, photon probability distribution, mean number of photons, and photon number variance, and show the influence of BO on the quantum-statistical characteristics of light. For example, the collapse-revivals picture and vacuum Rabi oscillations are strongly modulated with the Bloch frequency. As a result, the quantum properties of light and the degree of electron-photon entanglement become controllable via adiabatic dc field tuning. On the other hand, the low-frequency tunneling current depends on the quantum light statistics (in particular, for a coherent initial state it is modulated according to the collapse-revivals picture). The developed model is universal with respect to the physical origin of the artificial atom and the frequency range of the atom-light interaction. The model is adapted to 2D heterostructures (THz frequencies), semiconductor quantum dots (optical range), and Josephson junctions (microwaves). The data for numerical simulations are taken from recently published experiments. The obtained results open a new way in quantum state engineering and nano-photonic spectroscopy. 18. Scattering of atoms by molecules adsorbed at solid surfaces International Nuclear Information System (INIS) Parra, Zaida. 1988-01-01 The formalism of collisional time-correlation functions, appropriate for scattering by many-body targets, is implemented to study energy transfer in the scattering of atoms and ions from molecules adsorbed on metal surfaces.
Double differential cross-sections for the energy and angular distributions of atoms and ions scattered by a molecule adsorbed on a metal surface are derived in the limit of impulsive collisions and within a statistical model that accounts for single and double collisions. They are found to be given by the product of an effective cross-section that accounts for the probability of deflection into a solid angle times a probability per unit energy transfer. A cluster model is introduced for the vibrations of an adsorbed molecule which includes the molecular atoms, the surface atoms binding the molecule, and their nearest neighbors. The vibrational modes of CO adsorbed on a Ni(001) metal surface are obtained using two different cluster models to represent the on-top and bridge-bonding situations. A He/OC-Ni(001) potential is constructed from a strongly repulsive potential of He interacting with the oxygen atom in the CO molecule and a van der Waals attraction accounting for the He interaction with the free Ni(001) surface. A potential is presented for Li⁺/OC-Ni(001) where a Coulombic term is introduced to account for the image force. Trajectory studies are performed and analyzed in three dimensions to obtain effective classical cross-sections for the He/OC-Ni(001) and Li⁺/OC-Ni(001) systems. Results for the double differential cross-sections are presented as functions of scattering angles, energy transfer and collisional energy. Temperature dependence results are also analyzed. Extensions of the approach and inclusion of effects such as anharmonicity, collisions at lower energies, and applications of the approach to higher coverages are discussed. 19.
Atomic and molecular layer deposition for surface modification Energy Technology Data Exchange (ETDEWEB) Vähä-Nissi, Mika, E-mail: [email protected] [VTT Technical Research Centre of Finland, PO Box 1000, FI‐02044 VTT (Finland); Sievänen, Jenni; Salo, Erkki; Heikkilä, Pirjo; Kenttä, Eija [VTT Technical Research Centre of Finland, PO Box 1000, FI‐02044 VTT (Finland); Johansson, Leena-Sisko, E-mail: [email protected] [Aalto University, School of Chemical Technology, Department of Forest Products Technology, PO Box 16100, FI‐00076 AALTO (Finland); Koskinen, Jorma T.; Harlin, Ali [VTT Technical Research Centre of Finland, PO Box 1000, FI‐02044 VTT (Finland) 2014-06-01 Atomic and molecular layer deposition (ALD and MLD, respectively) techniques are based on repeated cycles of gas–solid surface reactions. A partial monolayer of atoms or molecules is deposited to the surface during a single deposition cycle, enabling tailored film composition in principle down to molecular resolution on ideal surfaces. Typically ALD/MLD has been used for applications where uniform and pinhole free thin film is a necessity even on 3D surfaces. However, thin – even non-uniform – atomic and molecular deposited layers can also be used to tailor the surface characteristics of different non-ideal substrates. For example, print quality of inkjet printing on polymer films and penetration of water into porous nonwovens can be adjusted with low-temperature deposited metal oxide. In addition, adhesion of extrusion coated biopolymer to inorganic oxides can be improved with a hybrid layer based on lactic acid. - Graphical abstract: Print quality of a polylactide film surface modified with atomic layer deposition prior to inkjet printing (360 dpi) with an aqueous ink. Number of printed dots illustrated as a function of 0, 5, 15 and 25 deposition cycles of trimethylaluminum and water. - Highlights: • ALD/MLD can be used to adjust surface characteristics of films and fiber materials. 
• Hydrophobicity after few deposition cycles of Al{sub 2}O{sub 3} due to e.g. complex formation. • Same effect on cellulosic fabrics observed with low temperature deposited TiO{sub 2}. • Different film growth and oxidation potential with different precursors. • Hybrid layer on inorganic layer can be used to improve adhesion of polymer melt. 20. Evidence for non-conservative current-induced forces in the breaking of Au and Pt atomic chains OpenAIRE Sabater, Carlos; Untiedt, Carlos; van Ruitenbeek, Jan M 2015-01-01 This experimental work aims at probing current-induced forces at the atomic scale. Specifically it addresses predictions in recent work regarding the appearance of run-away modes as a result of a combined effect of the non-conservative wind force and a ‘Berry force’. The systems we consider here are atomic chains of Au and Pt atoms, for which we investigate the distribution of break down voltage values. We observe two distinct modes of breaking for Au atomic chains. The breaking at high volta... 1. Effects of temperature and surface orientation on migration behaviours of helium atoms near tungsten surfaces Energy Technology Data Exchange (ETDEWEB) Wang, Xiaoshuang; Wu, Zhangwen; Hou, Qing, E-mail: [email protected] 2015-10-15 Molecular dynamics simulations were performed to study the dependence of migration behaviours of single helium atoms near tungsten surfaces on the surface orientation and temperature. For W{100} and W{110} surfaces, He atoms can quickly escape out near the surface without accumulation even at a temperature of 400 K. The behaviours of helium atoms can be well-described by the theory of continuous diffusion of particles in a semi-infinite medium. For a W{111} surface, the situation is complex. Different types of trap mutations occur within the neighbouring region of the W{111} surface. The trap mutations hinder the escape of He atoms, resulting in their accumulation. 
The probability of a He atom escaping into vacuum from a trap mutation depends on the type of the trap mutation, and the occurrence probabilities of the different types of trap mutations are dependent on the temperature. This finding suggests that the escape rate of He atoms on the W{111} surface does not show a monotonic dependence on temperature. For instance, the escape rate at T = 1500 K is lower than the rate at T = 1100 K. Our results are useful for understanding the structural evolution and He release on tungsten surfaces and for designing models in other simulation methods beyond molecular dynamics. 2. Segregation of chain ends to polymer melt surfaces and interfaces International Nuclear Information System (INIS) Zhao, W.; Zhao, X.; Rafailovich, M.H.; Sokolov, J.; Composto, R.J.; Smith, S.D.; Satkowski, M.; Russell, T.P.; Dozier, W.D.; Mansfield, T. 1993-01-01 The conformation of polymer chains in the melt near an impenetrable boundary has recently been studied by molecular dynamics and off-lattice Monte Carlo simulations. Both types of calculations show an enhancement of the chain end density within a distance of approximately two polymer segment lengths of the interface relative to the bulk. In the absence of preferential interactions between monomers and the interface, the segregation arises from minimizing the loss of conformational entropy near an impenetrable boundary; i.e., by positioning an end near the surface, only one unit rather than two is reflected. In order to obtain an experimental measure of this effect, monodisperse polystyrene (PS) chains of molecular weight 63 000 with short blocks of deuterated polystyrene (dPS) at each end were prepared. The block length was kept as short as possible, while yet producing sufficient neutron scattering contrast in order to minimize any preferential surface segregation due to isotopic effects. 
The synthesis was carried out via living anionic polymerization of a purified styrene monomer in cyclohexane at 60 °C, utilizing sec-butyllithium as the initiator. The process was terminated using degassed methanol. 3. Surface Preparation of InAs (110) Using Atomic Hydrogen Directory of Open Access Journals (Sweden) T.D. Veal 2002-06-01 Full Text Available Atomic hydrogen cleaning has been used to produce structurally and electronically damage-free InAs(110) surfaces. X-ray photoelectron spectroscopy (XPS) was used to obtain chemical composition and chemical state information about the surface, before and after the removal of the atmospheric contamination. Low energy electron diffraction (LEED) and high-resolution electron-energy-loss spectroscopy (HREELS) were also used, respectively, to determine the surface reconstruction and degree of surface ordering, and to probe the adsorbed contaminant vibrational modes and the collective excitations of the clean surface. Clean, ordered and stoichiometric InAs(110)-(1×1) surfaces were obtained by exposure to thermally generated atomic hydrogen at a substrate temperature as low as 400 °C. Semi-classical dielectric theory analysis of HREEL spectra of the phonon and plasmon excitations of the clean surface indicates that no electronic damage or dopant passivation was induced by the surface preparation method. 4. Atomic-scale friction on stepped surfaces of ionic crystals. Science.gov (United States) Steiner, Pascal; Gnecco, Enrico; Krok, Franciszek; Budzioch, Janusz; Walczak, Lukasz; Konior, Jerzy; Szymonski, Marek; Meyer, Ernst 2011-05-06 We report on high-resolution friction force microscopy on a stepped NaCl(001) surface in ultrahigh vacuum. The measurements were performed on single cleavage step edges. When blunt tips are used, friction is found to increase while scanning both up and down a step edge. With atomically sharp tips, friction still increases upwards, but it decreases and even changes sign downwards.
Our observations extend previous results obtained without resolving atomic features and are associated with the competition between the Schwöbel barrier and the asymmetric potential well accompanying the step edges. 5. Attractive interaction between an atom and a surface International Nuclear Information System (INIS) Manson, J.R.; Ritchie, R.H. 1983-01-01 Using a general self-energy formalism we examine the interaction between an atom and a surface. Considered in detail are deviations from the Van der Waals force due to recoil and finite velocity of the particle. Calculations for positronium near a metal surface show that for such systems recoil and velocity effects are significant even at very low energies. We also examine the mechanisms for energy exchange with the surface and calculations show that single quantum events do not always dominate the exchange rates. 8 references, 2 figures 6. Simulating evaporation of surface atoms of thorium-alloyed tungsten in strong electric fields International Nuclear Information System (INIS) Bochkanov, P.V.; Mordyuk, V.S.; Ivanov, Yu.I. 1984-01-01 Evaporation of surface atoms of thorium-alloyed tungsten in strong electric fields is simulated by the Monte Carlo method. Evaporation of surface atoms is shown to be strongest for pure tungsten as compared with thorium-alloyed tungsten over the concentration range of thorium atoms in the tungsten matrix (1.5-15%). The evaporation rate increases with the thorium atom concentration. The surface atom evaporation rate is determined, in relative units, as a function of surface temperature and electric field strength. 7. Pb chains on ordered Si(3 3 5) surface International Nuclear Information System (INIS) Kisiel, M.; Skrobas, K.; Zdyb, R.; Mazurek, P.; Jalochowski, M. 2007-01-01 The electronic band structure of the Si(3 3 5)-Au surface decorated with Pb atoms was studied with angle resolved photoelectron spectroscopy (ARPES) in ultra high vacuum (UHV) conditions.
The photoemission spectra were measured in two perpendicular directions, along and across the steps. In the direction parallel to the step edges the ARPES spectra show strongly dispersive electron energy band while in the perpendicular direction there is no electronic dispersion at all. This confirms one-dimensional character of the system. The theoretical band dispersion calculated within a tight-binding model was fitted to that obtained from the experiment 8. Spatial dispersion in atom-surface quantum friction International Nuclear Information System (INIS) Reiche, D.; Dalvit, D. A. R.; Busch, K.; Intravaia, F. 2017-01-01 We investigate the influence of spatial dispersion on atom-surface quantum friction. We show that for atom-surface separations shorter than the carrier's mean free path within the material, the frictional force can be several orders of magnitude larger than that predicted by local optics. In addition, when taking into account spatial dispersion effects, we show that the commonly used local thermal equilibrium approximation underestimates by approximately 95% the drag force, obtained by employing the recently reported nonequilibrium fluctuation-dissipation relation for quantum friction. Unlike the treatment based on local optics, spatial dispersion in conjunction with corrections to local thermal equilibrium change not only the magnitude but also the distance scaling of quantum friction. 9. Improved density functional calculations for atoms, molecules and surfaces International Nuclear Information System (INIS) Fricke, B.; Anton, J.; Fritzsche, S.; Sarpe-Tudoran, C. 2005-01-01 The non-collinear and collinear descriptions within relativistic density functional theory is described. We present results of both non-collinear and collinear calculations for atoms, diatomic molecules, and some surface simulations. 
We find that the accuracy of our density functional calculations for the smaller systems is comparable to good quantum chemical calculations, and thus this method provides a sound basis for larger systems where no such comparison is possible. (author) 10. Tunable self-assembled spin chains of strongly interacting cold atoms for demonstration of reliable quantum state transfer DEFF Research Database (Denmark) Loft, N. J. S.; Marchukov, O. V.; Petrosyan, D. 2016-01-01 We have developed an efficient computational method to treat long, one-dimensional systems of strongly-interacting atoms forming self-assembled spin chains. Such systems can be used to realize many spin chain model Hamiltonians tunable by the external confining potential. As a concrete demonstration, we consider quantum state transfer in a Heisenberg spin chain and we show how to determine the confining potential in order to obtain nearly-perfect state transfer. 11. Atomic structures of CdTe and CdSe (110) surfaces International Nuclear Information System (INIS) Watari, K.; Ferraz, A.C. 1996-01-01 Results are reported based on self-consistent density-functional theory, within the local-density approximation using ab-initio pseudopotentials, for clean CdTe and CdSe (110) surfaces. We analyzed the trends in the equilibrium atomic structures and the variations of the bond angles at the II-VI (110) surfaces. The calculations are sensitive to the ionicity of the materials, and the results are in agreement with the arguments which predict that the relaxed zinc-blende (110) surfaces should depend on ionicity. (author). 17 refs., 1 fig., 3 tabs 12. Interaction of antihydrogen with ordinary atoms and solid surfaces Energy Technology Data Exchange (ETDEWEB) Froelich, Piotr, E-mail: [email protected]; Voronin, Alexei [P.N.
Lebedev Physical Institute (Russian Federation) 2012-12-15 The characteristic features of cold atom-antiatom collisions and antiatom-surface interactions are discussed and illustrated by the results for hydrogen-antihydrogen scattering and for quantum reflection of ultracold antihydrogen from a metallic surface. We discuss in some detail the case of spin-exchange in ultracold H̄-H collisions, exposing the interplay of Coulombic, strong and dispersive forces, and demonstrating the sensitivity of the spin-exchange cross sections to hypothetical violations of Charge-Parity-Time (CPT) symmetry. 13. Transfer matrix treatment of atomic chemisorption on transition metal surface International Nuclear Information System (INIS) Mariz, A.M.; Koiller, B. 1980-05-01 The atomic adsorption of hydrogen on a paramagnetic nickel (100) surface is studied, using the Green's function formalism and the transfer matrix technique, which allows the treatment of the geometry of the system in a simple manner. Electronic correlation at the adatom orbital is incorporated in a self-consistent Hartree-Fock approach. The adsorption energy, local density of states and charge transfer between the solid and the adatom are calculated for different crystal structures (sc and fcc) and adatom positions at the surface. The results are discussed in comparison with other theories and with available experimental data, with satisfactory agreement. (Author) [pt 14.
Critical surface phase of α2(2 × 4) reconstructed zig-zag chains on InAs(001) Energy Technology Data Exchange (ETDEWEB) Guo, Xiang [Department of Electronic Information Science and Technology, Guizhou University, Guizhou, Guiyang 550025 (China); Zhou, Xun [Department of Electronic Information Science and Technology, Guizhou University, Guizhou, Guiyang 550025 (China); School of Physics and Electronics Science, Guizhou Normal University, Guizhou, Guiyang 550001 (China); Wang, Ji-Hong [Department of Electronic Information Science and Technology, Guizhou University, Guizhou, Guiyang 550025 (China); Luo, Zi-Jiang [Department of Electronic Information Science and Technology, Guizhou University, Guizhou, Guiyang 550025 (China); School of Education Administration, Guizhou University of Finance and Economics, Guizhou, Guiyang 550004 (China); Zhou, Qing; Liu, Ke; Hu, Ming-Zhe [Department of Electronic Information Science and Technology, Guizhou University, Guizhou, Guiyang 550025 (China); Ding, Zhao, E-mail: [email protected] [Department of Electronic Information Science and Technology, Guizhou University, Guizhou, Guiyang 550025 (China) 2014-07-01 The critical condition for InAs(001) surface phase transition has been studied, the surface phase transition of InAs(001) showed discontinuity with hysteresis cycle as a function of substrate temperature. A mixed reconstruction surface and zig-zag chain α2(2 × 4) reconstruction surface have been observed by scanning tunneling microscopy. Considering the interaction and dynamics of surface arsenic atoms, the zig-zag chains of α2(2 × 4) reconstruction were found to be actually caused by the selective adsorption and desorption of surface arsenic dimers, they played a critical role in the surface phase transition between (2 × 4) and (4 × 2). 
- Highlights: • Discontinuous surface phase transition phenomena on the flat InAs(001) surface • Nanoscale InAs(001) surface observed by scanning tunneling microscopy • “Zig-Zag” chains of α2(2 × 4) reconstruction • Critical role in the surface phase transition between (2 × 4) and (4 × 2) 16. Atomic Resolution Imaging and Quantification of Chemical Functionality of Surfaces Energy Technology Data Exchange (ETDEWEB) Schwarz, Udo D. [Yale Univ., New Haven, CT (United States). Dept. of Mechanical Engineering and Materials Science; Altman, Eric I. [Yale Univ., New Haven, CT (United States). Dept.
of Chemical and Environmental Engineering 2014-12-10 The work carried out from 2006-2014 under DoE support was targeted at developing new approaches to the atomic-scale characterization of surfaces that include species-selective imaging and an ability to quantify chemical surface interactions with site-specific accuracy. The newly established methods were subsequently applied to gain insight into the local chemical interactions that govern the catalytic properties of model catalysts of interest to DoE. The foundation of our work was the development of three-dimensional atomic force microscopy (3DAFM), a new measurement mode that allows the mapping of the complete surface force and energy fields with picometer resolution in space (x, y, and z) and piconewton/millielectron volts in force/energy. From this experimental platform, we further expanded by adding the simultaneous recording of tunneling current (3D-AFM/STM) using chemically well-defined tips. Through comparison with simulations, we were able to achieve precise quantification and assignment of local chemical interactions to exact positions within the lattice. During the course of the project, the novel techniques were applied to surface-oxidized copper, titanium dioxide, and silicon oxide. On these materials, defect-induced changes to the chemical surface reactivity and electronic charge density were characterized with site-specific accuracy. 17. Site-selective substitutional doping with atomic precision on stepped Al (111) surface by single-atom manipulation. Science.gov (United States) Chen, Chang; Zhang, Jinhu; Dong, Guofeng; Shao, Hezhu; Ning, Bo-Yuan; Zhao, Li; Ning, Xi-Jing; Zhuang, Jun 2014-01-01 In fabrication of nano- and quantum devices, it is sometimes critical to position individual dopants at certain sites precisely to obtain the specific or enhanced functionalities. 
With first-principles simulations, we propose a method for substitutional doping of an individual atom at a certain position on a stepped metal surface by single-atom manipulation. A selected atom at the step of the Al (111) surface could be extracted vertically with an Al trimer-apex tip, and then the dopant atom will be positioned to this site. The details of the entire process including potential energy curves are given, which suggests the reliability of the proposed single-atom doping method. 18. Chain-Branching Control of the Atomic Structure of Alkanethiol-Based Gold–Sulfur Interfaces DEFF Research Database (Denmark) Wang, Yun; Chi, Qijin; Zhang, Jingdong 2011-01-01 Density functional theory structure calculations at 0 K and simulations at 300 K of observed high-resolution in situ scanning tunneling microscopy (STM) images reveal three different atomic-interface structures for the self-assembled monolayers (SAMs) of three isomeric butanethiols on Au(111): direct binding to the Au(111) surface without pitting, binding to adatoms above a regular surface with extensive pitting, and binding to adatoms with local surface vacancies and some pitting. Thermal motions are shown to produce some observed STM features, with a very tight energy balance controlling... 19. Evaporative cooling of cold atoms in a surface trap International Nuclear Information System (INIS) Hammes, M.; Rychtarik, D.; Grimm, R. 2001-01-01 Full text: Trapping cold atoms close to a surface is a promising route for attaining a two-dimensional quantum gas. We present our gravito-optical surface trap (LOST), which consists of a horizontal evanescent-wave atom mirror in combination with a blue-detuned hollow beam for transverse confinement. Optical pre-cooling based on inelastic reflections from the evanescent wave provides good starting conditions for subsequent evaporative cooling, which can be realized by ramping down the optical potentials of the trap.
Already our preliminary experiments (performed at the MPI fuer Kernphysik in Heidelberg) show a 100-fold increase in phase-space density and temperature reduction to 300 nK. Substantial further improvements can be expected in our greatly improved set-up after the recent transfer of the experiment to Innsbruck. By eliminating heating processes, optimizing the evaporation ramp, polarizing the atoms and by using an additional far red-detuned laser beam we expect to soon reach the conditions of quantum degeneracy and/or two-dimensionality. (author) 20. Semiclassical perturbation theory for diffraction in heavy atom surface scattering. Science.gov (United States) Miret-Artés, Salvador; Daon, Shauli; Pollak, Eli 2012-05-28 The semiclassical perturbation theory formalism of Hubbard and Miller [J. Chem. Phys. 78, 1801 (1983)] for atom surface scattering is used to explore the possibility of observation of heavy atom diffractive scattering. In the limit of vanishing ℏ the semiclassical theory is shown to reduce to the classical perturbation theory. The quantum diffraction pattern is sensitive to the characteristics of the beam of incoming particles. Necessary conditions for observation of quantum diffraction are derived for the angular width of the incoming beam. An analytic expression for the angular distribution as a function of the angular and momentum variance of the incoming beam is obtained. We show both analytically and through some numerical results that increasing the angular width of the incident beam leads to decoherence of the quantum diffraction peaks and one approaches the classical limit. However, the incoherence of the beam in the parallel direction does not destroy the diffraction pattern. We consider the specific example of Ar atoms scattered from a rigid LiF(100) surface. 1. 
Equivalence of chain conformations in the surface region of a polymer melt and a single Gaussian chain under critical conditions NARCIS (Netherlands) Skvortsov, A.M.; Leermakers, F.A.M.; Fleer, G.J. 2013-01-01 In the melt, polymer conformations are nearly ideal according to Flory's ideality hypothesis. Silberberg generalized this statement for chains in the interfacial region. We check the Silberberg argument by analyzing the conformations of a probe chain end-grafted at a solid surface in a sea of floating free chains. 2. Equivalence of chain conformations in the surface region of a polymer melt and a single Gaussian chain under critical conditions. Science.gov (United States) Skvortsov, A M; Leermakers, F A M; Fleer, G J 2013-08-07 In the melt, polymer conformations are nearly ideal according to Flory's ideality hypothesis. Silberberg generalized this statement for chains in the interfacial region. We check the Silberberg argument by analyzing the conformations of a probe chain end-grafted at a solid surface in a sea of floating free chains of concentration φ by the self-consistent field (SCF) method. Apart from the grafting, probe chain and floating chains are identical. Most of the results were obtained for a standard SCF model with freely jointed chains on a six-choice lattice, where immediate step reversals are allowed. A few data were generated for a five-choice lattice, where such step reversals are forbidden. These coarse-grained models describe the equilibrium properties of flexible atactic polymer chains at the scale of the segment length. The concentration was varied over the whole range from φ = 0 (single grafted chain) to φ = 1 (probe chain in the melt).
The number of contacts with the surface, average height of the free end and its dispersion, average loop and train length, tail size distribution, end-point and overall segment distributions were calculated for a grafted probe chain as a function of φ, for several chain lengths and substrate∕polymer interactions, which were varied from strong repulsion to strong adsorption. The computations show that the conformations of the probe chain in the melt do not depend on substrate∕polymer interactions and are very similar to the conformations of a single end-grafted chain under critical conditions, and can thus be described analytically. When the substrate∕polymer interaction is fixed at the value corresponding to critical conditions, all equilibrium properties of a probe chain are independent of φ, over the whole range from a dilute solution to the melt. We believe that the conformations of all flexible chains in the surface region of the melt are close to those of an appropriate single chain in critical conditions, provided 3. Atomic and electronic structure of surfaces theoretical foundations CERN Document Server Lannoo, Michel 1991-01-01 Surfaces and interfaces play an increasingly important role in today's solid state devices. In this book the reader is introduced, in a didactic manner, to the essential theoretical aspects of the atomic and electronic structure of surfaces and interfaces. The book does not pretend to give a complete overview of contemporary problems and methods. Instead, the authors strive to provide simple but qualitatively useful arguments that apply to a wide variety of cases. The emphasis of the book is on semiconductor surfaces and interfaces but it also includes a thorough treatment of transition metals, a general discussion of phonon dispersion curves, and examples of large computational calculations. The exercises accompanying every chapter will be of great benefit to the student. 4. 
Single atom anisotropic magnetoresistance on a topological insulator surface KAUST Repository 2015-03-12 © 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. We demonstrate single atom anisotropic magnetoresistance on the surface of a topological insulator, arising from the interplay between the helical spin-momentum-locked surface electronic structure and the hybridization of the magnetic adatom states. Our first-principles quantum transport calculations based on density functional theory for Mn on Bi2Se3 elucidate the underlying mechanism. We complement our findings with a two-dimensional model valid for both single adatoms and magnetic clusters, which leads to a proposed device setup for experimental realization. Our results provide an explanation for the conflicting scattering experiments on magnetic adatoms on topological insulator surfaces, and reveal the real-space spin texture around the magnetic impurity. 5. Transient atomic behavior and surface kinetics of GaN International Nuclear Information System (INIS) Moseley, Michael; Billingsley, Daniel; Henderson, Walter; Trybus, Elaissa; Doolittle, W. Alan 2009-01-01 An in-depth model for the transient behavior of metal atoms adsorbed on the surface of GaN is developed. This model is developed by qualitatively analyzing transient reflection high energy electron diffraction (RHEED) signals, which were recorded for a variety of growth conditions of GaN grown by molecular-beam epitaxy (MBE) using metal-modulated epitaxy (MME). Details such as the initial desorption of a nitrogen adlayer and the formation of the Ga monolayer, bilayer, and droplets are monitored using RHEED and related to Ga flux and shutter cycles. The suggested model increases the understanding of the surface kinetics of GaN, provides an indirect method of monitoring the kinetic evolution of these surfaces, and introduces a novel method of in situ growth rate determination. 6.
Transient atomic behavior and surface kinetics of GaN Science.gov (United States) Moseley, Michael; Billingsley, Daniel; Henderson, Walter; Trybus, Elaissa; Doolittle, W. Alan 2009-07-01 An in-depth model for the transient behavior of metal atoms adsorbed on the surface of GaN is developed. This model is developed by qualitatively analyzing transient reflection high energy electron diffraction (RHEED) signals, which were recorded for a variety of growth conditions of GaN grown by molecular-beam epitaxy (MBE) using metal-modulated epitaxy (MME). Details such as the initial desorption of a nitrogen adlayer and the formation of the Ga monolayer, bilayer, and droplets are monitored using RHEED and related to Ga flux and shutter cycles. The suggested model increases the understanding of the surface kinetics of GaN, provides an indirect method of monitoring the kinetic evolution of these surfaces, and introduces a novel method of in situ growth rate determination. 7. Deposition of size-selected atomic clusters on surfaces International Nuclear Information System (INIS) Carroll, S.J. 1999-06-01 This dissertation presents technical developments and experimental and computational investigations concerned with the deposition of atomic clusters onto surfaces. It consists of a collection of papers, in which the main body of results are contained, and four chapters presenting a subject review, computational and experimental techniques and a summary of the results presented in full within the papers. Technical work includes the optimization of an existing gas condensation cluster source based on evaporation, and the design, construction and optimization of a new gas condensation cluster source based on RF magnetron sputtering (detailed in Paper 1). The result of cluster deposition onto surfaces is found to depend on the cluster deposition energy; three impact energy regimes are explored in this work. 
(1) Low energy: clusters create a defect in the surface, which pins the cluster in place, inhibiting cluster diffusion at room temperature (Paper V). (3) High energy: > 50 eV/atom. The clusters implant into the surface. For Ag20–Ag200 clusters, the implantation depth is found to scale linearly with the impact energy and inversely with the cross-sectional area of the cluster, with an offset due to energy lost to the elastic compression of the surface (Paper VI). For smaller (Ag3) clusters the orientation of the cluster with respect to the surface and the precise impact site play an important role; the impact energy has to be 'focused' in order for cluster implantation to occur (Paper VII). The application of deposited clusters for the creation of Si nanostructures by plasma etching is explored in Paper VIII. (author) 8. Atomic force microscopy analysis of different surface treatments of Ti dental implant surfaces International Nuclear Information System (INIS) Bathomarco, R.V.; Solorzano, G.; Elias, C.N.; Prioli, R. 2004-01-01 The surface of commercial unalloyed titanium, used in dental implants, was analyzed by atomic force microscopy. The morphology, roughness, and surface area of the samples, submitted to mechanically-induced erosion, chemical etching and a combination of both, were compared. The results show that surface treatments strongly influence the dental implant physical and chemical properties. An analysis of the length dependence of the implant surface roughness shows that, for scan sizes larger than 50 μm, the average surface roughness is independent of the scanning length and that the surface treatments lead to average surface roughness in the range of 0.37 to 0.48 μm. It is shown that the implant surface energy is sensitive to the titanium surface area. As the area increases there is a decrease in the surface contact angle. 9.
Atomic force microscopy analysis of different surface treatments of Ti dental implant surfaces Science.gov (United States) Bathomarco, R. V.; Solorzano, G.; Elias, C. N.; Prioli, R. 2004-06-01 The surface of commercial unalloyed titanium, used in dental implants, was analyzed by atomic force microscopy. The morphology, roughness, and surface area of the samples, submitted to mechanically-induced erosion, chemical etching and a combination of both, were compared. The results show that surface treatments strongly influence the dental implant physical and chemical properties. An analysis of the length dependence of the implant surface roughness shows that, for scan sizes larger than 50 μm, the average surface roughness is independent of the scanning length and that the surface treatments lead to average surface roughness in the range of 0.37 to 0.48 μm. It is shown that the implant surface energy is sensitive to the titanium surface area. As the area increases there is a decrease in the surface contact angle. 10. Fast atom diffraction for grazing scattering of Ne atoms from a LiF(0 0 1) surface International Nuclear Information System (INIS) Gravielle, M.S.; Schueller, A.; Winter, H.; Miraglia, J.E. 2011-01-01 Angular distributions of fast Ne atoms after grazing collisions with a LiF(0 0 1) surface under axial surface channeling conditions are experimentally and theoretically studied. We use the surface eikonal approximation to describe the quantum interference of scattered projectiles, while the atom-surface interaction is represented by means of a pairwise additive potential, including the polarization of the projectile atom. Experimental data serve as a benchmark to investigate the performance of the proposed potential model, analyzing the role played by the projectile polarization. 11.
Fast atom diffraction for grazing scattering of Ne atoms from a LiF(0 0 1) surface Energy Technology Data Exchange (ETDEWEB) Gravielle, M.S., E-mail: [email protected] [Instituto de Astronomia y Fisica del Espacio (CONICET-UBA), Casilla de correo 67, sucursal 28 C1428EGA, Buenos Aires (Argentina); Departamento de Fisica, Fac. de Ciencias Exactas y Naturales, Universidad de Buenos Aires (Argentina); Schueller, A.; Winter, H. [Institut fuer Physik, Humboldt Universitaet zu Berlin, Newtonstrasse 15, D-12489 Berlin-Adlershof (Germany); Miraglia, J.E. [Instituto de Astronomia y Fisica del Espacio (CONICET-UBA), Casilla de correo 67, sucursal 28 C1428EGA, Buenos Aires (Argentina); Departamento de Fisica, Fac. de Ciencias Exactas y Naturales, Universidad de Buenos Aires (Argentina) 2011-06-01 Angular distributions of fast Ne atoms after grazing collisions with a LiF(0 0 1) surface under axial surface channeling conditions are experimentally and theoretically studied. We use the surface eikonal approximation to describe the quantum interference of scattered projectiles, while the atom-surface interaction is represented by means of a pairwise additive potential, including the polarization of the projectile atom. Experimental data serve as a benchmark to investigate the performance of the proposed potential model, analyzing the role played by the projectile polarization. 12. Enhanced atom mobility on the surface of a metastable film. Science.gov (United States) Picone, A; Riva, M; Fratesi, G; Brambilla, A; Bussetti, G; Finazzi, M; Duò, L; Ciccacci, F 2014-07-25 A remarkable enhancement of atomic diffusion is highlighted by scanning tunneling microscopy performed on ultrathin metastable body-centered tetragonal Co films grown on Fe(001). The films follow a nearly perfect layer-by-layer growth mode with a saturation island density strongly dependent on the layer on which the nucleation occurs, indicating a lowering of the diffusion barrier. 
Density functional theory calculations reveal that this phenomenon is driven by the increasing capability of the film to accommodate large deformations as the thickness approaches the limit at which a structural transition occurs. These results disclose the possibility of tuning surface diffusion dynamics and controlling cluster nucleation and self-organization. 13. Simulating atomic-scale phenomena on surfaces of unconventional superconductors Energy Technology Data Exchange (ETDEWEB) Kreisel, Andreas; Andersen, Brian [Niels Bohr Institute (Denmark); Choubey, Peayush; Hirschfeld, Peter [Univ. of Florida (United States); Berlijn, Tom [CNMS and CSMD, Oak Ridge National Laboratory (United States) 2016-07-01 Interest in atomic-scale effects in superconductors has increased because of two general developments: First, the discovery of new materials such as the cuprate superconductors, heavy fermion and Fe-based superconductors, where the coherence length of the Cooper pairs is small enough to be comparable to the lattice constant, rendering small-scale effects important. Second, the experimental ability to image sub-atomic features using scanning tunneling microscopy, which allows one to unravel numerous physical properties of the homogeneous system, such as the quasiparticle excitation spectra or various types of competing order, as well as properties of local disorder. On the theoretical side, the available methods are based on lattice models restricting the spatial resolution of such calculations. In the present project we combine lattice calculations using the Bogoliubov-de Gennes equations describing the superconductor with wave function information containing sub-atomic resolution obtained from ab initio approaches. This allows us to calculate phenomena on surfaces of superconductors as directly measured in scanning tunneling experiments and therefore opens the possibility to identify underlying properties of these materials and explain observed features of disorder.
It will be shown how this method applies to the cuprate material Bi2Sr2CaCu2O8 and an Fe-based superconductor. 14. Atomization of magnesium, strontium, barium and lead nitrates on surface of graphite atomizers International Nuclear Information System (INIS) Nagdaev, V.K.; Pupyshev, A.A. 1982-01-01 Modelling of the processes on the graphite surface using differential-thermal analysis and a graphite core, with identification of decomposition products of magnesium, strontium, barium and lead nitrates by X-ray analysis, has shown that carbon promotes the formation of strontium, barium and lead carbonates. The obtained temperatures of strontium and barium carbonate decomposition to oxides agree satisfactorily with calculated ones. Magnesium nitrate does not react with carbon. Formation of strontium and barium carbonates results in considerable slowing down of the process of gaseous oxide dissociation. Lead carbonate is unstable and rapidly decomposes to oxide with subsequent reduction to free metal. Formation of magnesium, strontium and barium free atoms is connected with the appearance of gaseous oxides in the analytical zone. Oxide and free metal lead are present on the graphite surface simultaneously. 15. Carbyne from first principles: chain of C atoms, a nanorod or a nanorope. Science.gov (United States) Liu, Mingjie; Artyukhov, Vasilii I; Lee, Hoonkyung; Xu, Fangbo; Yakobson, Boris I 2013-11-26 We report an extensive study of the properties of carbyne using first-principles calculations. We investigate carbyne's mechanical response to tension, bending, and torsion deformations. Under tension, carbyne is about twice as stiff as the stiffest known materials and has an unrivaled specific strength of up to 7.5 × 10⁷ N·m/kg, requiring a force of ∼10 nN to break a single atomic chain. Carbyne has a fairly large room-temperature persistence length of about 14 nm.
Surprisingly, the torsional stiffness of carbyne can be zero but can be "switched on" by appropriate functional groups at the ends. Further, under appropriate termination, carbyne can be switched into a magnetic semiconductor state by mechanical twisting. We reconstruct the equivalent continuum elasticity representation, providing the full set of elastic moduli for carbyne, showing its extreme mechanical performance (e.g., a nominal Young's modulus of 32.7 TPa with an effective mechanical thickness of 0.772 Å). We also find an interesting coupling between strain and band gap of carbyne, which is strongly increased under tension, from 2.6 to 4.7 eV under a 10% strain. Finally, we study the performance of carbyne as a nanoscale electrical cable and estimate its chemical stability against self-aggregation, finding an activation barrier of 0.6 eV for the carbyne-carbyne cross-linking reaction and an equilibrium cross-link density for two parallel carbyne chains of 1 cross-link per 17 C atoms (2.2 nm). 16. Accessing the dynamics of end-grafted flexible polymer chains by atomic force-electrochemical microscopy. Theoretical modeling of the approach curves by the elastic bounded diffusion model and Monte Carlo simulations. Evidence for compression-induced lateral chain escape. Science.gov (United States) Abbou, Jeremy; Anne, Agnès; Demaille, Christophe 2006-11-16 The dynamics of a molecular layer of linear poly(ethylene glycol) (PEG) chains of molecular weight 3400, bearing at one end a ferrocene (Fc) label and thiol end-grafted at a low surface coverage onto a gold substrate, is probed using combined atomic force-electrochemical microscopy (AFM-SECM), at the scale of approximately 100 molecules. Force and current approach curves are simultaneously recorded as a force-sensing microelectrode (tip) is inserted within the approximately 10 nm thick, redox labeled, PEG chain layer. 
Whereas the force approach curve gives access to the structure of the compressed PEG layer, the tip-current, resulting from tip-to-substrate redox cycling of the Fc head of the chain, is controlled by chain dynamics. The elastic bounded diffusion model, which considers the motion of the Fc head as diffusion in a conformational field, complemented by Monte Carlo (MC) simulations, from which the chain conformation can be derived for any degree of confinement, allows the theoretical tip-current approach curve to be calculated. The experimental current approach curve can then be very satisfyingly reproduced by theory, down to a tip-substrate separation of approximately 2 nm, using only one adjustable parameter characterizing the chain dynamics: the effective diffusion coefficient of the chain head. At closer tip-substrate separations, an unpredicted peak is observed in the experimental current approach curve, which is shown to find its origin in a compression-induced escape of the chain from within the narrowing tip-substrate gap. MC simulations provide quantitative support for lateral chain elongation as the escape mechanism. 17. Influence of the atomic structure of crystal surfaces on the surface diffusion in medium temperature range International Nuclear Information System (INIS) Cousty, J.P. 1981-12-01 In this work, we have studied the influence of the atomic structure of the crystal surface on surface self-diffusion in the medium temperature range. Two ways are followed. First, we have measured, using a radiotracer method, the self-diffusion coefficient at 820 K (0.6 of the melting temperature) on copper surfaces both the structure and the cleanliness of which were stable during the experiment. We have shown that the interaction between mobile surface defects and steps can be studied through measurements of the anisotropy of surface self-diffusion.
Second, the behavior of an adatom and a surface vacancy is simulated via a molecular dynamics method on several surfaces of a Lennard-Jones crystal. An inventory of possible migration mechanisms of these surface defects has been drawn between 0.35 and 0.45 of the melting temperature. The results obtained with both methods point out the influence of the surface atomic structure on surface self-diffusion in the medium temperature range [fr] 18. Electronic torsional sound in linear atomic chains: Chemical energy transport at 1000 km/s Energy Technology Data Exchange (ETDEWEB) Kurnosov, Arkady A.; Rubtsov, Igor V.; Maksymov, Andrii O.; Burin, Alexander L., E-mail: [email protected] [Department of Chemistry, Tulane University, New Orleans, Louisiana 70118 (United States) 2016-07-21 We investigate entirely electronic torsional vibrational modes in linear cumulene chains. The carbon nuclei of a cumulene are positioned along the primary axis so that they can participate only in the transverse and longitudinal motions. However, the interatomic electronic clouds behave as a torsion spring with remarkable torsional stiffness. The collective dynamics of these clouds can be described in terms of electronic vibrational quanta, which we name torsitons. It is shown that the group velocity of the wavepacket of torsitons is much higher than the typical speed of sound, because of the small mass of participating electrons compared to the atomic mass. For the same reason, the maximum energy of the torsitons in cumulenes is as high as a few electronvolts, while the minimum possible energy is evaluated as a few hundred wavenumbers and this minimum is associated with asymmetry of zero-point atomic vibrations. Theory predictions are consistent with the time-dependent density functional theory calculations. Molecular systems for experimental evaluation of the predictions are proposed. 19.
Electronic torsional sound in linear atomic chains: Chemical energy transport at 1000 km/s Science.gov (United States) Kurnosov, Arkady A.; Rubtsov, Igor V.; Maksymov, Andrii O.; Burin, Alexander L. 2016-07-01 We investigate entirely electronic torsional vibrational modes in linear cumulene chains. The carbon nuclei of a cumulene are positioned along the primary axis so that they can participate only in the transverse and longitudinal motions. However, the interatomic electronic clouds behave as a torsion spring with remarkable torsional stiffness. The collective dynamics of these clouds can be described in terms of electronic vibrational quanta, which we name torsitons. It is shown that the group velocity of the wavepacket of torsitons is much higher than the typical speed of sound, because of the small mass of participating electrons compared to the atomic mass. For the same reason, the maximum energy of the torsitons in cumulenes is as high as a few electronvolts, while the minimum possible energy is evaluated as a few hundred wavenumbers and this minimum is associated with asymmetry of zero-point atomic vibrations. Theory predictions are consistent with the time-dependent density functional theory calculations. Molecular systems for experimental evaluation of the predictions are proposed. 20. Experimental studies of ions and atoms interaction with insulating surface International Nuclear Information System (INIS) Villette, J. 2000-10-01 Grazing collisions (He+, Ne+, Ne0, Na+) on a LiF(001) single crystal, an ionic insulator, are investigated by a time-of-flight technique. The incident beam is chopped and the scattered particles are collected on a position-sensitive detector providing the differential cross section, while the time of flight gives the energy loss. Deflection plates allow the charge state analysis.
Secondary electrons are detected in coincidence, allowing direct measurements of the electron emission yield and of the angular and energy distributions through time-of-flight measurements. The target electronic structure, characterized by a large band gap, governs the collisional processes: charge exchange, electronic excitations and electron emission. In particular, these studies show that the population of local target excitations (surface excitons) is the major contribution to the kinetic energy transfer (stopping power). Auger neutralization of Ne+ and He+ ions reveals the population of quasi-molecular excitons, an exciton bound to two holes, referred to in the literature as a trion. A direct energy balance determines the binding energy associated with these excited states of the surface. Besides these electronic energy loss processes, two nuclear energy loss mechanisms are characterized. These processes imply momentum transfer to individual target atoms during close binary collisions or, if the projectile is charged, to collective modes of optical phonons induced by the projectile Coulomb field. The effect of the temperature on the scattering profile, the contribution of topological surface defects to the energy loss profile and to skipping motion on the surface are analyzed in view of classical trajectory simulations. (author) 1. Adsorption of chitosan onto carbonaceous surfaces and its application: atomic force microscopy study International Nuclear Information System (INIS) Tan Shengnan; Liu Zhiguo; Zu Yuangang; Fu Yujie; Xing Zhimin; Zhao Lin; Sun Tongze; Zhou Zhen 2011-01-01 The adsorption of chitosan onto highly ordered pyrolytic graphite (HOPG) surfaces and its applications have been studied by atomic force microscopy (AFM). The results indicated that the chitosan topography formed on the HOPG surface significantly depends on the pH conditions and its concentration for the incubation.
Under strongly acidic conditions, chitosan formed into uniform network structures composed of fine chains. When the solution pH was changed from 3.5 to 6.5, chitosan tended to form a thicker film. Under neutral and basic conditions, chitosan changed into spherical nanoparticles, and their sizes increased with increasing pH. Dendritic structures have been observed when the chitosan concentration was increased up to 5 mg ml−1. In addition, the chitosan topography can also be influenced by ionic strength and the addition of different metal ions. When 0.1 M metal ions Na+, Mg2+, Ca2+ and Cu2+ were added into the chitosan solution at pH 3.0 for the incubation, network structures, branched chains, block structures and dense networks attached with many small particles were observed, respectively. The potential applications of these chitosan structures on HOPG have been explored. Preliminary results characterized by AFM and XPS indicated that the chitosan network formed on the HOPG surface can be used for AFM lithography, selective adsorption of gold nanoparticles and DNA molecules. 2. ONE-DIMENSIONAL ORDERING OF IN ATOMS IN A CU(100) SURFACE NARCIS (Netherlands) BREEMAN, M; BARKEMA, GT; BOERMA, DO 1994-01-01 A Monte Carlo study of the ordering of In atoms embedded in the top layer of a Cu(100) surface is presented. The interaction energies between the In and Cu atoms were derived from atom-embedding calculations, with Finnis-Sinclair potentials. It was found that the interaction between In atoms in the 3. A semiflexible alternating copolymer chain adsorption on a flat and a fluctuating surface International Nuclear Information System (INIS) Mishra, Pramod Kumar 2010-01-01 A lattice model of a directed self-avoiding walk is used to investigate adsorption properties of a semiflexible alternating copolymer chain on an impenetrable flat and fluctuating surface in two (square, hexagonal and rectangular lattice) and three dimensions (cubic lattice).
In the cubic-lattice case the surface is a flat, impenetrable two-dimensional plane, while in two dimensions the surface is either a fluctuating impenetrable line (hexagonal lattice) or a flat impenetrable line (square and rectangular lattices). Walks of the copolymer chains are directed perpendicular to the plane of the surface, and at a suitable value of the monomer-surface attraction the copolymer chain gets adsorbed on the surface. To obtain the exact value of the monomer-surface attraction, the directed walk model has been solved analytically using the generating function method, and results are discussed for the cases in which one type of monomer of the copolymer chain has an attractive, repulsive or no interaction with the surface. Results obtained in the flat-surface case show that, for a stiffer copolymer chain, the adsorption transition occurs at a smaller value of the monomer-surface attraction than for a flexible copolymer chain, while in the case of a fluctuating surface the adsorption transition point is independent of the bending energy of the copolymer chain. These features are similar to those of semiflexible homopolymer chain adsorption. 4. A semiflexible alternating copolymer chain adsorption on a flat and a fluctuating surface. Science.gov (United States) Mishra, Pramod Kumar 2010-04-21 A lattice model of a directed self-avoiding walk is used to investigate adsorption properties of a semiflexible alternating copolymer chain on an impenetrable flat and fluctuating surface in two (square, hexagonal and rectangular lattice) and three dimensions (cubic lattice). In the cubic-lattice case the surface is a flat, impenetrable two-dimensional plane, while in two dimensions the surface is either a fluctuating impenetrable line (hexagonal lattice) or a flat impenetrable line (square and rectangular lattices). Walks of the copolymer chains are directed perpendicular to the plane of the surface, and at a suitable value of the monomer-surface attraction the copolymer chain gets adsorbed on the surface.
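As a computational aside to the directed-walk adsorption model summarized in the two records above: the qualitative effect of a monomer-surface attraction can be reproduced by exact enumeration of a simple (1+1)-dimensional directed walk. The sketch below is not the author's generating-function solution; it is a hypothetical minimal model (height steps ±1, impenetrable surface at height 0, Boltzmann weight w per surface contact) coded purely for illustration.

```python
import math

def partition_function(n_steps, w):
    """Z_n(w) for an n-step directed walk (height steps +1/-1), confined to
    the half-space y >= 0, with a Boltzmann weight w for every step that
    lands on the impenetrable surface y = 0 (toy monomer-surface attraction)."""
    weights = {0: 1.0}  # weights[y] = total weight of partial walks at height y
    for _ in range(n_steps):
        new = {}
        for y, z in weights.items():
            for dy in (1, -1):
                ny = y + dy
                if ny < 0:
                    continue  # the surface is impenetrable
                new[ny] = new.get(ny, 0.0) + z * (w if ny == 0 else 1.0)
        weights = new
    return sum(weights.values())

def mean_contacts(n_steps, w, dw=1e-6):
    """Average number of surface contacts, <m> = w * d(ln Z)/dw,
    estimated by a forward finite difference."""
    return w * (math.log(partition_function(n_steps, w + dw))
                - math.log(partition_function(n_steps, w))) / dw

# With w = 1 the partition function reduces to a pure path count,
# C(n, n//2); increasing w pulls the walk onto the surface.
```

Increasing the contact weight w monotonically increases the mean number of surface contacts, which is the enumeration analogue of the adsorption transition discussed in the abstracts.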
To obtain the exact value of the monomer-surface attraction, the directed walk model has been solved analytically using the generating function method, and results are discussed for the cases in which one type of monomer of the copolymer chain has an attractive, repulsive or no interaction with the surface. Results obtained in the flat-surface case show that, for a stiffer copolymer chain, the adsorption transition occurs at a smaller value of the monomer-surface attraction than for a flexible copolymer chain, while in the case of a fluctuating surface the adsorption transition point is independent of the bending energy of the copolymer chain. These features are similar to those of semiflexible homopolymer chain adsorption. 5. Characterization of polymer surface structure and surface mechanical behaviour by sum frequency generation surface vibrational spectroscopy and atomic force microscopy International Nuclear Information System (INIS) Opdahl, Aric; Koffas, Telly S; Amitay-Sadovsky, Ella; Kim, Joonyeong; Somorjai, Gabor A 2004-01-01 Sum frequency generation (SFG) vibrational spectroscopy and atomic force microscopy (AFM) have been used to study polymer surface structure and surface mechanical behaviour, specifically the relationships between the surface properties of polymers, their bulk compositions and the environment to which the polymer is exposed. The combination of SFG surface vibrational spectroscopy and AFM has been used to study the surface segregation behaviour of polyolefin blends at the polymer/air and polymer/solid interfaces. SFG surface vibrational spectroscopy and AFM experiments have also been performed to characterize the properties of polymer/liquid and polymer/polymer interfaces, focusing on hydrogel materials. A method was developed to study the surface properties of hydrogel contact lens materials at various hydration conditions.
Finally, the effect of mechanical stretching on the surface composition and surface mechanical behaviour of phase-separated polyurethanes, used in biomedical implant devices, has been studied by both SFG surface vibrational spectroscopy and AFM. (topical review) 6. Fine tuning the ionic liquid-vacuum outer atomic surface using ion mixtures. Science.gov (United States) Villar-Garcia, Ignacio J; Fearn, Sarah; Ismail, Nur L; McIntosh, Alastair J S; Lovelock, Kevin R J 2015-03-28 Ionic liquid-vacuum outer atomic surfaces can be created that are remarkably different from the bulk composition. In this communication we demonstrate, using low-energy ion scattering (LEIS), that for ionic liquid mixtures the outer atomic surface shows significantly more atoms from anions with weaker cation-anion interactions (and vice versa). 7. Functionalized polymer film surfaces via surface-initiated atom transfer radical polymerization International Nuclear Information System (INIS) Hu, Y.; Li, J.S.; Yang, W.T.; Xu, F.J. 2013-01-01 The ability to manipulate and control the surface properties of polymer films, without altering the substrate properties, is crucial to their widespread applications. In this work, a simple one-step method for the direct immobilization of benzyl chloride groups (as the effective atom transfer radical polymerization (ATRP) initiators) on the polymer films was developed via benzophenone-induced coupling of 4-vinylbenzyl chloride (VBC). Polyethylene (PE) and nylon films were selected as examples of polymer films to illustrate the functionalization of film surfaces via surface-initiated ATRP. Functional polymer brushes of (2-dimethylamino)ethyl methacrylate, sodium 4-styrenesulfonate, 2-hydroxyethyl methacrylate and glycidyl methacrylate, as well as their block copolymer brushes, have been prepared via surface-initiated ATRP from the VBC-coupled PE or nylon film surfaces.
With the development of a simple approach to the covalent immobilization of ATRP initiators on polymer film surfaces and the inherent versatility of surface-initiated ATRP, the surface functionality of polymer films can be precisely tailored. - Highlights: ► Atom transfer radical polymerization initiators were simply immobilized. ► Different functional polymer brushes were readily prepared. ► Their block copolymer brushes were also readily prepared. 8. Engineering Particle Surface Chemistry and Electrochemistry with Atomic Layer Deposition Science.gov (United States) Jackson, David Hyman Kentaro Atomic layer deposition (ALD) is a vapor phase thin film coating technique that relies on sequential pulsing of precursors that undergo self-limited surface reactions. The self-limiting reactions and gas phase diffusion of the precursors together enable the conformal coating of microstructured particles with a high degree of thickness and compositional control. ALD may be used to deposit thin films that introduce new functionalities to a particle surface. Examples of new functionalities include: chemical reactivity, a mechanically strong protective coating, and an electrically resistive layer. The coating's properties often depend on the bulk properties and microstructure of the particle substrate, though the coating usually does not affect them. Particle ALD finds utility in the ability to synthesize well-controlled model systems, though it is expensive due to the need for costly metal precursors that are dangerous and require special handling. Enhanced properties due to ALD coating of particles in various applications are frequently described empirically, while the details of their enhancement mechanisms often remain the focus of ongoing research in the field. This study covers the various types of particle ALD and attempts to describe them from the unifying perspective of surface science. 9.
Study on the GaAs(110) surface using emitted atom spectrometry International Nuclear Information System (INIS) Gayone, J.E.; Sanchez, E.A.; Grizzi, O.; Universidad Nacional de Cuyo, Mendoza 1998-01-01 The facilities implemented at Bariloche for ion scattering spectrometry are described, together with recent examples of the technique's application to determining the atomic structure and composition of clean and adsorbate-covered metallic and semiconductor surfaces. The surface analysis technique using emitted atom spectrometry is discussed. Its sensitivity to the atomic relaxation of the GaAs(110) surface is presented, and the kinetics of hydrogen adsorption on this surface are studied. 10. Fabrication of Robust and Antifouling Superhydrophobic Surfaces via Surface-Initiated Atom Transfer Radical Polymerization. Science.gov (United States) Xue, Chao-Hua; Guo, Xiao-Jing; Ma, Jian-Zhong; Jia, Shun-Tian 2015-04-22 Superhydrophobic surfaces were fabricated via surface-initiated atom transfer radical polymerization of fluorinated methacrylates on poly(ethylene terephthalate) (PET) fabrics. The hydrophobicity of the PET fabric was systematically tunable by controlling the polymerization time. The obtained superhydrophobic fabrics showed excellent chemical robustness even after exposure to different chemicals, such as acid, base, salt, acetone, and toluene. Importantly, the fabrics maintained superhydrophobicity after 2500 abrasion cycles, 100 laundering cycles, and long-time exposure to UV irradiation. Also, the surface of the superhydrophobic fabrics showed excellent antifouling properties. 11. A theoretical study of hydrogen atoms adsorption and diffusion on PuO_2 (110) surface International Nuclear Information System (INIS) Yu, H.L.; Tang, T.; Zheng, S.T.; Shi, Y.; Qiu, R.Z.; Luo, W.H.; Meng, D.Q.
2016-01-01 The mechanisms of adsorption and diffusion of hydrogen atoms on the PuO_2 (110) surface are investigated by density functional theory corrected for onsite Coulombic interactions (GGA + U). In order to find the energetically more favorable adsorption site and the optimum diffusion path, the adsorption energies of atomic H on various sites and the diffusion energy barriers are derived and compared. Our results show that both chemisorption and physisorption exist for H atom adsorption configurations on the PuO_2 (110) surface. Two processes for H diffusion are investigated using the climbing-image nudged elastic band (cNEB) approach. We have identified two diffusion mechanisms, leading to migration of atomic H on the surface and diffusion from the surface to the subsurface. The energy barriers indicate that it is energetically more favorable for the H atom to remain on the surface. Hydrogen permeation through the pure PuO_2 surface is mainly limited by hydrogen atom diffusion from the surface to the subsurface. - Highlights: • H atom adsorption on the PuO_2 (110) surface is investigated by GGA + U. • Both chemisorption and physisorption exist for H atom adsorption configurations. • H atom migration into the PuO_2 (110) surface is inhibited, with a barrier of 2.15 eV. • H atom diffusion on the PuO_2 (110) surface is difficult at room temperature. 12. Functionalization of vertically aligned carbon nanotubes with polystyrene via surface initiated reversible addition fragmentation chain transfer polymerization Energy Technology Data Exchange (ETDEWEB) Macdonald, Thomas; Gibson, Christopher T.; Constantopoulos, Kristina; Shapter, Joseph G.
[Flinders Centre for Nanoscale Science and Technology, School of Chemical and Physical Sciences, Flinders University, GPO Box 2100, Adelaide, SA, 5001 (Australia); Ellis, Amanda V., E-mail: [email protected] [Flinders Centre for Nanoscale Science and Technology, School of Chemical and Physical Sciences, Flinders University, GPO Box 2100, Adelaide, SA, 5001 (Australia) 2012-01-15 Here we demonstrate the covalent attachment of vertically aligned (VA) acid treated single-walled carbon nanotubes (SWCNTs) onto a silicon substrate via dicyclohexylcarbodiimide (DCC) coupling chemistry. Subsequently, the pendant carboxyl moieties on the sidewalls of the VA-SWCNTs were derivatized to acyl chlorides, and then finally to bis(dithioester) moieties using a magnesium chloride dithiobenzoate salt. The bis(dithioester) moieties were then successfully shown to act as a chain transfer agent (CTA) in the reversible addition fragmentation chain transfer (RAFT) polymerization of styrene in a surface initiated 'grafting-from' process from the VA-SWCNT surface. Atomic force microscopy (AFM) verified vertical alignment of the SWCNTs and the maintenance thereof throughout the synthesis process. Finally, Raman scattering spectroscopy and AFM confirmed polystyrene functionalization. 13. Functionalization of vertically aligned carbon nanotubes with polystyrene via surface initiated reversible addition fragmentation chain transfer polymerization International Nuclear Information System (INIS) Macdonald, Thomas; Gibson, Christopher T.; Constantopoulos, Kristina; Shapter, Joseph G.; Ellis, Amanda V. 2012-01-01 Here we demonstrate the covalent attachment of vertically aligned (VA) acid treated single-walled carbon nanotubes (SWCNTs) onto a silicon substrate via dicyclohexylcarbodiimide (DCC) coupling chemistry. 
Subsequently, the pendant carboxyl moieties on the sidewalls of the VA-SWCNTs were derivatized to acyl chlorides, and then finally to bis(dithioester) moieties using a magnesium chloride dithiobenzoate salt. The bis(dithioester) moieties were then successfully shown to act as a chain transfer agent (CTA) in the reversible addition fragmentation chain transfer (RAFT) polymerization of styrene in a surface initiated “grafting-from” process from the VA-SWCNT surface. Atomic force microscopy (AFM) verified vertical alignment of the SWCNTs and the maintenance thereof throughout the synthesis process. Finally, Raman scattering spectroscopy and AFM confirmed polystyrene functionalization. 14. Surface microstructure of bitumen characterized by atomic force microscopy. Science.gov (United States) Yu, Xiaokong; Burnham, Nancy A; Tao, Mingjiang 2015-04-01 Bitumen, also called asphalt binder, plays important roles in many industrial applications. It is used as the primary binding agent in asphalt concrete, as a key component in damping systems such as rubber, and as an indispensable additive in paint and ink. Consisting of a large number of hydrocarbons of different sizes and polarities, together with heteroatoms and traces of metals, bitumen displays rich surface microstructures that affect its rheological properties. This paper reviews the current understanding of bitumen's surface microstructures characterized by Atomic Force Microscopy (AFM). Microstructures of bitumen develop to different forms depending on crude oil source, thermal history, and sample preparation method. While some bitumens display surface microstructures with fine domains, flake-like domains, and dendrite structuring, 'bee-structures' with wavy patterns several micrometers in diameter and tens of nanometers in height are commonly seen in other binders. 
Controversy exists regarding the chemical origin of the 'bee-structures', which has been related to the asphaltene fraction, the metal content, or the crystallizing waxes in bitumen. The rich chemistry of bitumen can result in complicated intermolecular associations such as coprecipitation of wax and metalloporphyrins in asphaltenes. Therefore, it is the molecular interactions among the different chemical components in bitumen, rather than a single chemical fraction, that are responsible for the evolution of bitumen's diverse microstructures, including the 'bee-structures'. Mechanisms such as curvature elasticity and surface wrinkling that explain the rippled structures observed in polymer crystals might be responsible for the formation of 'bee-structures' in bitumen. Despite the progress made on morphological characterization of bitumen using AFM, the fundamental question whether the microstructures observed on bitumen surfaces represent its bulk structure remains to be addressed. In addition 15. Dynamics of gas-surface interactions atomic-level understanding of scattering processes at surfaces CERN Document Server Díez Muniño, Ricardo 2013-01-01 This book gives a representative survey of the state of the art of research on gas-surface interactions. It provides an overview of the current understanding of gas surface dynamics and, in particular, of the reactive and non-reactive processes of atoms and small molecules at surfaces. Leading scientists in the field, both from the theoretical and the experimental sides, write in this book about their most recent advances. Surface science grew as an interdisciplinary research area over the last decades, mostly because of new experimental technologies (ultra-high vacuum, for instance), as well as because of a novel paradigm, the ‘surface science’ approach. 
The book describes the second transformation, now taking place, pushed by the availability of powerful quantum-mechanical theoretical methods implemented numerically. In the book, experiment and theory progress hand in hand with an unprecedented degree of accuracy and control. The book presents how modern surface science targets the atomic-level u... 16. Charge transfer rates for xenon Rydberg atoms at metal and semiconductor surfaces Energy Technology Data Exchange (ETDEWEB) Dunning, F.B. [Department of Physics and Astronomy, Rice University, MS 61, 6100 Main Street, Houston, TX 77005-1892 (United States)]. E-mail: [email protected]; Wethekam, S. [Institut fuer Physik der Humboldt-Universitaet zu Berlin, Newtonstr. 15, D-12489 Berlin (Germany); Dunham, H.R. [Department of Physics and Astronomy, Rice University, MS 61, 6100 Main Street, Houston, TX 77005-1892 (United States); Lancaster, J.C. [Department of Physics and Astronomy, Rice University, MS 61, 6100 Main Street, Houston, TX 77005-1892 (United States) 2007-05-15 Recent progress in the study of charge exchange between xenon Rydberg atoms and surfaces is reviewed. Experiments using Au(1 1 1) surfaces show that under appropriate conditions each incident atom can be detected as an ion. The ionization dynamics, however, are strongly influenced by the perturbations in the energies and structure of the atomic states that occur as the ion collection field is applied and as the atom approaches the surface. These lead to avoided crossings between different atomic levels, causing the atom to successively assume the character of a number of different states and lose much of its initial identity. The effects of this mixing are discussed. Efficient surface ionization is also observed at Si(1 0 0) surfaces, although the ion signal is influenced by stray fields present at the surface. 17.
Atom condensation on an atomically smooth surface: Ir, Re, W, and Pd on Ir(111) International Nuclear Information System (INIS) Wang, S.C.; Ehrlich, G. 1991-01-01 The distribution of condensing metal atoms over the two types of sites present on an atomically smooth Ir(111) surface has been measured in a field ion microscope. For Ir, Re, W, and Pd from a thermal source, condensing on Ir(111) at ∼20 K, the atoms are randomly distributed, as expected if they condense at the first site struck. 18. He atom-surface scattering: Surface dynamics of insulators, overlayers and crystal growth International Nuclear Information System (INIS) 1992-01-01 Investigations in this laboratory have focused on the surface structure and dynamics of ionic insulators and on epitaxial growth onto alkali halide crystals. In the latter, the homoepitaxial growth of NaCl/NaCl(001) and the heteroepitaxial growth of KBr/NaCl(001), NaCl/KBr(001) and KBr/RbCl(001) have been studied by monitoring the specular He scattering as a function of the coverage and by measuring the angular and energy distributions of the scattered He atoms. These data provide information on the surface structure, defect densities, island sizes and surface strain during the layer-by-layer growth. The temperature dependence of these measurements also provides information on the mobilities of the admolecules. He atom scattering is unique among surface probes because the low-energy, inert atoms are sensitive only to the electronic structure of the topmost surface layer and are equally applicable to all crystalline materials. It is proposed for the next year to exploit further the variety of combinations possible with the alkali halides in order to carry out a definitive study of epitaxial growth in the ionic insulators.
The work completed so far, including measurements of the Bragg diffraction and surface dispersion at various stages of growth, appears to be exceptionally rich in detail, which is particularly promising for theoretical modeling. In addition, because epitaxial growth conditions over a wide range of lattice mismatches are possible with these materials, size effects in growth processes can be explored in great depth. Further, as some of the alkali halides have the CsCl structure instead of the NaCl structure, we can investigate the effects of heteroepitaxy with materials having different lattice preferences. Finally, by using co-deposition of different alkali halides, one can investigate the formation and stability of alloys and even alkali halide superlattices. 19. He atom-surface scattering: Surface dynamics of insulators, overlayers and crystal growth International Nuclear Information System (INIS) Safron, S.A.; Skofronick, J.G. 1994-01-01 This progress report describes work carried out in the study of surface structure and dynamics of ionic insulators, the microscopic interactions controlling epitaxial growth and the formation of overlayers, and energy exchange in multiphonon surface scattering. The approach used is to employ high-resolution helium atom scattering to study the geometry and structural features of the surfaces. Experiments have been carried out on the surface dynamics of RbCl and preliminary studies done on CoO and NiO. Epitaxial growth and overlayer dynamics experiments on the systems NaCl/NaCl(001), KBr/NaCl(001), NaCl/KBr(001) and KBr/RbCl(001) have been performed. They have collaborated with two theoretical groups to explore models of overlayer dynamics with which to compare and to interpret their experimental results. They have carried out extensive experiments on the multiphonon scattering of helium atoms from NaCl and, particularly, LiF.
Work has begun on self-assembling organic films on gold and silver surfaces (alkyl thiols/Au(111) and Ag(111)) 20. He-atom surface scattering apparatus for studies of crystalline surface dynamics. Progress report, May 1, 1985-April 30, 1986 International Nuclear Information System (INIS) 1986-01-01 The primary goal of this grant is the construction of a state-of-the-art He atom-crystal surface scattering apparatus which will be capable of measuring both elastic and inelastic scattering of He atoms from crystal surfaces of metals, semiconductors and insulators. First, the apparatus will be constructed and characterized, after which a program of studies on the surface dynamics of a variety of crystal surfaces will be started. 6 refs., 2 figs 1. SURFACE SITES AND MOBILITIES OF IN ATOMS ON A STEPPED CU(100) SURFACE STUDIED AT LOW COVERAGE NARCIS (Netherlands) BREEMAN, M; DORENBOS, G; BOERMA, DO The various surface sites of In atoms deposited to a coverage of 0.013 monolayer (ML) onto a stepped Cu(100) surface were determined with low-energy ion scattering (LEIS) as a function of deposition temperature. From the fractions of In atoms occupying different sites, observed in the temperature 2. Cold atoms near surfaces: designing potentials by sculpturing wires International Nuclear Information System (INIS) Della Pietra, Leonardo; Aigner, Simon; Hagen, Christoph vom; Lezec, Henri J; Schmiedmayer, Joerg 2005-01-01 The magnetic trapping potentials for atoms on atom chips are determined by the current flow pattern in the chip wires. By modifying the wire shape using focused ion beam nano-machining we can design specialized current flow patterns and therefore micro-design the magnetic trapping potentials. We give designs for a barrier, a quantum dot, and a double well or double barrier and show preliminary experiments with ultra cold atoms in these designed potentials 3. Evidence for non-conservative current-induced forces in the breaking of Au and Pt atomic chains. 
Science.gov (United States) Sabater, Carlos; Untiedt, Carlos; van Ruitenbeek, Jan M 2015-01-01 This experimental work aims at probing current-induced forces at the atomic scale. Specifically, it addresses predictions in recent work regarding the appearance of runaway modes as a result of a combined effect of the non-conservative wind force and a 'Berry force'. The systems we consider here are atomic chains of Au and Pt atoms, for which we investigate the distribution of breakdown voltage values. We observe two distinct modes of breaking for Au atomic chains. The breaking at high voltage appears to behave as expected for regular breakdown by thermal excitation due to Joule heating. However, there is a low-voltage breaking mode that has characteristics expected for the mechanism of current-induced forces. Although a full comparison would require more detailed information on the individual atomic configurations, the systems we consider are very similar to those considered in recent model calculations and the comparison between experiment and theory is very encouraging for the interpretation we propose. 4. Evidence for non-conservative current-induced forces in the breaking of Au and Pt atomic chains Directory of Open Access Journals (Sweden) Carlos Sabater 2015-12-01 Full Text Available This experimental work aims at probing current-induced forces at the atomic scale. Specifically, it addresses predictions in recent work regarding the appearance of runaway modes as a result of a combined effect of the non-conservative wind force and a ‘Berry force’. The systems we consider here are atomic chains of Au and Pt atoms, for which we investigate the distribution of breakdown voltage values. We observe two distinct modes of breaking for Au atomic chains. The breaking at high voltage appears to behave as expected for regular breakdown by thermal excitation due to Joule heating.
However, there is a low-voltage breaking mode that has characteristics expected for the mechanism of current-induced forces. Although a full comparison would require more detailed information on the individual atomic configurations, the systems we consider are very similar to those considered in recent model calculations and the comparison between experiment and theory is very encouraging for the interpretation we propose. 5. Surface diffusion of carbon atom and carbon dimer on Si(0 0 1) surface International Nuclear Information System (INIS) Zhu, J.; Pan, Z.Y.; Wang, Y.X.; Wei, Q.; Zang, L.K.; Zhou, L.; Liu, T.J.; Jiang, X.M. 2007-01-01 Carbon (C) atom and carbon dimer (C2) are known to be the main projectiles in the deposition of diamond-like carbon (DLC) films. The adsorption and diffusion of the C adatom and addimer (C2) on the fully relaxed Si(0 0 1)-(2 x 1) surface were studied by a combination of molecular dynamics (MD) and Monte Carlo (MC) simulations. The adsorption sites of the C and C2 on the surface and the potential barriers between these sites were first determined using the semi-empirical many-body Brenner and Tersoff potentials. We then estimated their hopping rates and traced their pathways. It is found that the diffusion of both C and C2 is strongly anisotropic in nature. In addition, the C adatom can diffuse a long distance on the surface while the adsorbed C2 is more likely to be confined in a local region. Thus we can expect that smoother films will be formed on the Si(0 0 1) surface with single C atoms as projectiles at moderate temperature, while with C2 the films will grow in two-dimensional islands. In addition, relatively higher kinetic energy of the projectile, say, a few tens of eV, is needed to grow DLC films of higher quality. This is consistent with experimental findings. 6.
Detecting onset of chain scission and crosslinking of γ-ray irradiated elastomer surfaces using frictional force microscopy Energy Technology Data Exchange (ETDEWEB) Banerjee, S [Materials Science Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 TN (India); Sinha, N K [Innovative Design Engineering and Synthesis Section, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 TN (India); Gayathri, N [Materials Science Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 TN (India); Ponraju, D [Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 TN (India); Dash, S [Materials Science Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 TN (India); Tyagi, A K [Materials Science Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 TN (India); Raj, Baldev [Materials Science Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 TN (India) 2007-02-07 We report here that atomic force microscopy (AFM) in frictional force mode can be used to detect the onset of chain scission and crosslinking in polymeric and macromolecular samples upon irradiation. A systematic investigation to detect chain scission and crosslinking of two elastomers, (1) ethylene-propylene-diene monomer rubber and (2) fluorocarbon rubber, upon γ-ray irradiation has been carried out using frictional force microscopy (FFM). From the AFM results we observed that both elastomers show a systematic smoothing of their surfaces as the γ-ray dose rate increases. However, the frictional properties of the sample surfaces, studied using FFM, show an initial increase and then a decrease as a function of dose rate. The initial increase in friction has been attributed to the onset of chain scission, and the subsequent decrease to the onset of crosslinking of the polymer chains.
The evaluated qualitative and semi-quantitative changes observed in the overall frictional property as a function of the γ-ray dose rate for the two elastomers are presented in this paper. 7. Detecting onset of chain scission and crosslinking of γ-ray irradiated elastomer surfaces using frictional force microscopy International Nuclear Information System (INIS) Banerjee, S; Sinha, N K; Gayathri, N; Ponraju, D; Dash, S; Tyagi, A K; Raj, Baldev 2007-01-01 We report here that atomic force microscopy (AFM) in frictional force mode can be used to detect the onset of chain scission and crosslinking in polymeric and macromolecular samples upon irradiation. A systematic investigation to detect chain scission and crosslinking of two elastomers, (1) ethylene-propylene-diene monomer rubber and (2) fluorocarbon rubber, upon γ-ray irradiation has been carried out using frictional force microscopy (FFM). From the AFM results we observed that both elastomers show a systematic smoothing of their surfaces as the γ-ray dose rate increases. However, the frictional properties of the sample surfaces, studied using FFM, show an initial increase and then a decrease as a function of dose rate. The initial increase in friction has been attributed to the onset of chain scission, and the subsequent decrease to the onset of crosslinking of the polymer chains. The evaluated qualitative and semi-quantitative changes observed in the overall frictional property as a function of the γ-ray dose rate for the two elastomers are presented in this paper. 8. In Situ Investigation of Electrochemically Mediated Surface-Initiated Atom Transfer Radical Polymerization by Electrochemical Surface Plasmon Resonance.
Science.gov (United States) Chen, Daqun; Hu, Weihua 2017-04-18 Electrochemically mediated atom transfer radical polymerization (eATRP) initiates/controls the controlled/living ATRP chain propagation process by electrochemically generating (regenerating) the activator (lower-oxidation-state metal complex) from the deactivator (higher-oxidation-state metal complex). Despite successful demonstrations in both homogeneous polymerization and heterogeneous systems (namely, surface-initiated ATRP, SI-ATRP), the eATRP process itself has never been investigated in situ, and important information regarding this process remains unknown. In this work, we report the first investigation of electrochemically mediated SI-ATRP (eSI-ATRP) by rationally combining the electrochemical technique with real-time surface plasmon resonance (SPR). In the experiment, the potential of an SPR gold chip modified by a self-assembled monolayer of the ATRP initiator was controlled to electrochemically reduce the deactivator to the activator to initiate the SI-ATRP, and the whole process was simultaneously monitored by SPR with a high time resolution of 0.1 s. It is found that it is feasible to electrochemically trigger/control the SI-ATRP, and that the polymerization rate is correlated with the potential applied to the gold chip. This work reveals important kinetic information for eSI-ATRP and offers a powerful platform for in situ investigation of such complicated processes. 9. Formation of InN atomic-size wires by simple N adsorption on the In/Si(111)–(4 × 1) surface International Nuclear Information System (INIS) Guerrero-Sánchez, J.; Takeuchi, Noboru 2016-01-01 Highlights: • N atoms on the surface form bonds with two In atoms and one Si atom. • Surface formation energy calculations show two stable structures with formation of InN atomic-size wires. • Projected density of states shows a tendency to form In−N and Si−N bonds on the surface.
• Charge density corroborates the covalent character of the In−N bonds. - Abstract: We have carried out first principles total energy calculations to study the formation of InN atomic-size wires on the In/Si(111)–(4 × 1) surface. In its most favorable adsorption site, a single N atom forms InN arrangements. The deposition of 0.25 monolayer (ML) of N atoms results in the breaking of one of the original In chains and the formation of an InN atomic-size wire. Increasing the coverage up to 0.5 ML of N atoms results in the formation of two of those wires. Calculated surface formation energies show that for N-poor conditions the most stable configuration is the original In/Si(111)–(4 × 1) surface with no N atoms. Increasing the N content, and in a reduced range of chemical potential, the formation of an InN wire is energetically favorable. In contrast, from intermediate to N-rich conditions, two InN atomic wires are more stable. Projected density of states calculations have shown a trend to form covalent bonds between the In−p and N−p orbitals in these stable models. 10. Optimized Hypernetted-Chain Solutions for Helium-4 Surfaces and Metal Surfaces Science.gov (United States) Qian, Guo-Xin This thesis is a study of inhomogeneous Bose systems such as liquid ⁴He slabs and inhomogeneous Fermi systems such as the electron gas in metal films, at zero temperature. Using a Jastrow-type many-body wavefunction, the ground state energy is expressed by means of Bogoliubov-Born-Green-Kirkwood-Yvon and Hypernetted-Chain techniques. For Bose systems, Euler-Lagrange equations are derived for the one- and two-body functions and systematic approximation methods are physically motivated. It is shown that the optimized variational method includes a self-consistent summation of ladder- and ring-diagrams of conventional many-body theory. For Fermi systems, a linear potential model is adopted to generate the optimized Hartree-Fock basis.
Euler-Lagrange equations are derived for the two-body correlations which serve to screen the strong bare Coulomb interaction. The optimization of the pair correlation leads to an expression for the correlation energy in which the state-averaged RPA part is separated. Numerical applications are presented for the density profile and pair distribution function for both ⁴He surfaces and metal surfaces. Both the bulk and surface energies are calculated in good agreement with experiments. 11. Behavior of Rydberg atoms at surfaces: energy level shifts and ionization Energy Technology Data Exchange (ETDEWEB) Dunning, F.B. E-mail: [email protected]; Dunham, H.R.; Oubre, C.; Nordlander, P 2003-04-01 The ionization of xenon atoms excited to the extreme red and blue states in high-lying Xe(n) Stark manifolds at a metal surface is investigated. The data show that, despite their very different initial spatial characteristics, the extreme members of a given Stark manifold ionize at similar atom/surface separations. This is explained, with the aid of complex scaling calculations, in terms of the strong perturbations in the energies and structure of the atomic states induced by the presence of the surface, which lead to avoided crossings between neighboring levels as the surface is approached. 12. Behavior of Rydberg atoms at surfaces: energy level shifts and ionization CERN Document Server Dunning, F B; Oubre, C D; Nordlander, P 2003-01-01 The ionization of xenon atoms excited to the extreme red and blue states in high-lying Xe(n) Stark manifolds at a metal surface is investigated.
This is explained, with the aid of complex scaling calculations, in terms of the strong perturbations in the energies and structure of the atomic states induced by the presence of the surface, which lead to avoided crossings between neighboring levels as the surface is approached. 13. Atoms International Nuclear Information System (INIS) Fuchs, Alain; Villani, Cedric; Guthleben, Denis; Leduc, Michele; Brenner, Anastasios; Pouthas, Joel; Perrin, Jean 2014-01-01 Completed by recent contributions on various topics (atoms and the Brownian motion, the career of Jean Perrin, the evolution of atomic physics since Jean Perrin, the relationship between scientific atomism and philosophical atomism), this book is a reprint of a book published at the beginning of the twentieth century in which the author addressed the relationship between atomic theory and chemistry (molecules, atoms, the Avogadro hypothesis, molecule structures, solutes, upper limits of molecular quantities), molecular agitation (molecule velocity, molecule rotation or vibration, molecular mean free path), the Brownian motion and emulsions (history and general features, statistical equilibrium of emulsions), the laws of the Brownian motion (Einstein's theory, experimental control), fluctuations (the theory of Smoluchowski), light and quanta (black body, extension of quantum theory), the atom of electricity, and the genesis and destruction of atoms (transmutations, atom counting). 14. Direct synthesis of sp-bonded carbon chains on graphite surface by femtosecond laser irradiation International Nuclear Information System (INIS) Hu, A.; Rybachuk, M.; Lu, Q.-B.; Duley, W. W. 2007-01-01 Microscopic phase transformation from graphite to sp-bonded carbon chains (carbyne) and nanodiamond has been induced by femtosecond laser pulses on a graphite surface.
UV/surface enhanced Raman scattering spectra and x-ray photoelectron spectra revealed the local synthesis of carbyne in the melt zone, while nanocrystalline diamond and trans-polyacetylene chains form in the edge area of gentle ablation. These results demonstrate the possibility of direct 'writing' of variably bonded carbon forms by femtosecond laser pulses for carbon-based applications. 15. Stretching of a polymer chain anchored to a surface: the massive field theory approach International Nuclear Information System (INIS) Usatenko, Zoryana 2014-01-01 Taking into account the well-known correspondence between the field theoretical φ⁴ O(n)-vector model in the limit n → 0 and the behaviour of long flexible polymer chains, the investigation of stretching of an ideal and a real polymer chain with excluded volume interactions in a good solvent anchored to repulsive and inert surfaces is performed. The calculations of the average stretching force which arises when the free end of a polymer chain moves away from a repulsive or inert surface are performed up to one-loop order of the massive field theory approach in fixed space dimensions d = 3. The analysis of the obtained results indicates that the average stretching force for a real polymer chain anchored to a repulsive surface demonstrates different behaviour for the cases z̃ ≪ 1 and z̃ ≫ 1, where z̃ = z′/R_z. Moreover, the results obtained in the framework of the massive field theory approach are in good agreement with previous theoretical results for an ideal polymer chain and with results of a density functional theory approach for the region of small applied forces, when deformation of a polymer chain in the direction of the applied force does not exceed the linear extension of a polymer chain in this direction. The better agreement between these two methods is observed as the number of monomers increases and the polymer chain becomes longer. (paper) 16.
Trapping and stabilization of hydrogen atoms in intracrystalline voids. Defected calcium fluorides and Y zeolite surfaces International Nuclear Information System (INIS) Iton, L.E.; Turkevich, J. 1978-01-01 Using EPR spectroscopy, it has been established that H. atoms are absorbed from the gas phase when CaF2 powder is exposed to H2 gas in which a microwave discharge is sustained, being trapped in sites that provide unusual thermal stability. The disposition of the trapped atoms is determined by the occluded water content of the CaF2. For ultrapure CaF2, atoms are trapped in interstitial sites having A0 = 1463 MHz; for increasing water content, two types of trapped H. atoms are discriminated, with preferential trapping in void sites (external to the regular fluorite lattice) that are associated with the H2O impurity. Characterization of these "extra-lattice" H. (and D.) atoms is presented, and their EPR parameters and behavior are discussed in detail. Failure to effect H.-D. atom exchange with D2 gas suggests that atoms are not stabilized on the CaF2 surface. H. atoms are trapped exclusively in "extra-lattice" sites when the water-containing CaF2 is γ-irradiated at 77 or 298 K, indicating that the scission product atoms do not escape from the precursor void region into the regular lattice. It is concluded that the thermal stability of the "extra-lattice" atoms, like that of the interstitial atoms, is determined ultimately by the high activation energy for diffusion of the H. atom through the CaF2 lattice. For comparison, results obtained from H. atoms trapped in γ-irradiated rare earth ion-exchanged Y zeolites are also presented and discussed; these "surface" trapped atoms do not exhibit great thermal stability. Distinctions in the H. atom formation mechanisms between the fluorides and the zeolites were deduced from the accompanying paramagnetic species formed. The intracavity electric fields in the Y zeolites have been estimated from the H.
atoms' hfsc contractions, and are found to be very high, about 1 V/Å. 17. Van der Waals enhancement of optical atom potentials via resonant coupling to surface polaritons. Science.gov (United States) Kerckhoff, Joseph; Mabuchi, Hideo 2009-08-17 Contemporary experiments in cavity quantum electrodynamics (cavity QED) with gas-phase neutral atoms rely increasingly on laser cooling and optical, magneto-optical or magnetostatic trapping methods to provide atomic localization with sub-micron uncertainty. Difficult to achieve in free space, this goal is further frustrated by atom-surface interactions if the desired atomic placement approaches within several hundred nanometers of a solid surface, as can be the case in setups incorporating monolithic dielectric optical resonators such as microspheres, microtoroids, microdisks or photonic crystal defect cavities. Typically in such scenarios, the smallest atom-surface separation at which the van der Waals interaction can be neglected is taken to be the optimal localization point for associated trapping schemes, but this sort of conservative strategy generally compromises the achievable cavity QED coupling strength. Here we suggest a new approach to the design of optical dipole traps for atom confinement near surfaces that exploits strong surface interactions, rather than avoiding them, and present the results of a numerical study based on 39K atoms and indium tin oxide (ITO). Our theoretical framework points to the possibility of utilizing nanopatterning methods to engineer novel modifications of atom-surface interactions. (c) 2009 Optical Society of America 18. Ballistic Evaporation and Solvation of Helium Atoms at the Surfaces of Protic and Hydrocarbon Liquids.
Science.gov (United States) Johnson, Alexis M; Lancaster, Diane K; Faust, Jennifer A; Hahn, Christine; Reznickova, Anna; Nathanson, Gilbert M 2014-11-06 Atomic and molecular solutes evaporate and dissolve by traversing an atomically thin boundary separating liquid and gas. Most solutes spend only short times in this interfacial region, making them difficult to observe. Experiments that monitor the velocities of evaporating species, however, can capture their final interactions with surface solvent molecules. We find that polarizable gases such as N2 and Ar evaporate from protic and hydrocarbon liquids with Maxwell-Boltzmann speed distributions. Surprisingly, the weakly interacting helium atom emerges from these liquids at high kinetic energies, exceeding the expected energy of evaporation from salty water by 70%. This super-Maxwellian evaporation implies in reverse that He atoms preferentially dissolve when they strike the surface at high energies, as if ballistically penetrating into the solvent. The evaporation energies increase with solvent surface tension, suggesting that He atoms require extra kinetic energy to navigate increasingly tortuous paths between surface molecules. 19. Ionization of xenon Rydberg atoms at Si(1 0 0) surfaces Energy Technology Data Exchange (ETDEWEB) Dunham, H.R. [Department of Physics and Astronomy, Rice University MS-61, 6100 Main Street, Houston, TX 77005-1892 (United States); Wethekam, S. [Institut für Physik der Humboldt-Universität zu Berlin, Newtonstraße 15, D-12489 Berlin (Germany); Lancaster, J.C. [Department of Physics and Astronomy, Rice University MS-61, 6100 Main Street, Houston, TX 77005-1892 (United States); Dunning, F.B. [Department of Physics and Astronomy, Rice University MS-61, 6100 Main Street, Houston, TX 77005-1892 (United States)]. E-mail: [email protected] 2007-03-15 The ionization of xenon Rydberg atoms excited to the lowest states in the n = 17 and n = 20 Stark manifolds at Si(1 0 0) surfaces is investigated.
It is shown that, under appropriate conditions, a sizable fraction of the incident atoms can be detected as ions. Although the onset in the ion signal is perturbed by stray fields present at the surface, the data are consistent with ionization rates similar to those measured earlier at metal surfaces. 20. Atomic spin-chain realization of a model for quantum criticality NARCIS (Netherlands) Toskovic, R.; van den Berg, R.; Spinelli, A.; Eliens, I.S.; van den Toorn, B.; Bryant, B.; Caux, J.-S.; Otte, A.F. The ability to manipulate single atoms has opened up the door to constructing interesting and useful quantum structures from the ground up. On the one hand, nanoscale arrangements of magnetic atoms are at the heart of future quantum computing and spintronic devices; on the other hand, they can be 1. A density functional theory study on the carbon chain growth of ethanol formation on Cu-Co (111) and (211) surfaces Energy Technology Data Exchange (ETDEWEB) Ren, Bohua; Dong, Xiuqin; Yu, Yingzhe [Key Laboratory for Green Chemical Technology of Ministry of Education, R&D Center for Petrochemical Technology, Tianjin University, Tianjin 300072 (China); Collaborative Innovation Center of Chemical Science and Engineering (Tianjin), Tianjin 300072 (China); Wen, Guobin [Collaborative Innovation Center of Chemical Science and Engineering (Tianjin), Tianjin 300072 (China); Zhang, Minhua, E-mail: [email protected] [Key Laboratory for Green Chemical Technology of Ministry of Education, R&D Center for Petrochemical Technology, Tianjin University, Tianjin 300072 (China); Collaborative Innovation Center of Chemical Science and Engineering (Tianjin), Tianjin 300072 (China) 2017-08-01 Highlights: • Calculations based on first-principles density functional theory were carried out to study ethanol formation from syngas on Cu-Co surfaces.
• The most controversial reactions in ethanol formation from syngas were investigated: the CO dissociation mechanism and the key reactions of carbon chain growth of ethanol formation (HCO insertion reactions (CHx + HCO → CHxCHO (x = 1–3))). • Four model surfaces (Cu-Co (111) and (211) with Cu-rich or Co-rich surfaces) were built to investigate the synergy of the Cu and Co components. • The PDOS of 4d orbitals and d-band center analysis of surface Cu and Co atoms of all surfaces were studied to reveal the correlation between electronic properties and catalytic performance. - Abstract: Calculations based on first-principles density functional theory were carried out to study the most controversial reactions in ethanol formation from syngas on Cu-Co surfaces: the CO dissociation mechanism and the key reactions of carbon chain growth of ethanol formation (HCO insertion reactions) on four model surfaces (Cu-Co (111) and (211) with Cu-rich or Co-rich surfaces) to investigate the synergy of the Cu and Co components, since computing the complete reaction network of ethanol formation from syngas on all four Cu-Co surface models would be a huge computational burden. We investigated adsorption of important species involved in these reactions, activation barriers and reaction energies of the H-assisted dissociation mechanism, direct dissociation of CO, and HCO insertion reactions (CHx + HCO → CHxCHO (x = 1–3)) on four Cu-Co surface models. It was found that reactions on Cu-rich (111) and (211) surfaces all have lower activation barriers in H-assisted dissociation and HCO insertion reactions, especially the CH + HCO → CHCHO reaction. The PDOS of 4d orbitals of surface Cu and Co atoms of all surfaces were studied. Analysis of d-band centers of Cu and Co atoms and the activation barrier data suggested the correlation between electronic property and catalytic performance. 2.
Surface atomic relaxation and magnetism on hydrogen-adsorbed Fe(110) surfaces from first principles Energy Technology Data Exchange (ETDEWEB) Chohan, Urslaan K.; Jimenez-Melero, Enrique [School of Materials, The University of Manchester, Manchester M13 9PL (United Kingdom); Dalton Cumbrian Facility, The University of Manchester, Moor Row CA24 3HA (United Kingdom); Koehler, Sven P.K., E-mail: [email protected] [Dalton Cumbrian Facility, The University of Manchester, Moor Row CA24 3HA (United Kingdom); School of Chemistry, The University of Manchester, Manchester M13 9PL (United Kingdom); Photon Science Institute, The University of Manchester, Manchester M13 9PL (United Kingdom) 2016-11-30 Highlights: • Potential energy surfaces for H diffusion on Fe(110) calculated. • Full vibrational analysis of surface modes performed. • Vibrational analysis establishes lb site as a transition state to the 3f site. • Pronounced buckling observed in the Fe surface layer. - Abstract: We have computed adsorption energies, vibrational frequencies, surface relaxation and buckling for hydrogen adsorbed on a body-centred-cubic Fe(110) surface as a function of the degree of H coverage. This adsorption system is important in a variety of technological processes such as the hydrogen embrittlement in ferritic steels, which motivated this work, and the Haber–Bosch process. We employed spin-polarised density functional theory to optimise geometries of a six-layer Fe slab, followed by frozen-mode finite-displacement phonon calculations to compute Fe–H vibrational frequencies. We have found that the quasi-threefold (3f) site is the most stable adsorption site, with adsorption energies of ∼3.0 eV/H for all coverages studied. The long-bridge (lb) site, which is close in energy to the 3f site, is actually a transition state leading to the stable 3f site. The calculated harmonic vibrational frequencies collectively span from 730 to 1220 cm−1, for a range of coverages.
The increased first-to-second layer spacing in the presence of adsorbed hydrogen, and the pronounced buckling observed in the Fe surface layer, may facilitate the diffusion of hydrogen atoms into the bulk, and therefore impact the early stages of hydrogen embrittlement in steels. 3. Mechanical torques generated by optically pumped atomic spin relaxation at surfaces International Nuclear Information System (INIS) Herman, R.M. 1982-01-01 It is argued that a valuable method of observing certain types of surface-atom interactions may lie in mechanical torques generated through the spin-orbit relaxation of valence electronic spins of optically pumped atoms at surfaces. The unusual feature of this phenomenon is that the less probable spin-orbit relaxation becomes highly visible as compared with the much more rapid paramagnetic relaxation, because of an enhancement, typically by as much as a factor 10^9, in the torques delivered to mechanical structures, by virtue of a very large effective moment arm. Spin-orbit relaxation operates through an exchange of translational momentum which, in turn, can be identified with the delivery of a gigantic angular momentum (in units of ℏ) relative to a distant axis about which mechanical motion is referred. The spin-orbit relaxation strongly depends upon the atomic number of the surface atoms and the strength of interaction with the optically pumped atoms. Being dominated by high-atomic-number surface atoms, spin-orbit relaxation rates may not be too strongly influenced by minor surface contamination of lighter-weight optically active atoms. 4. Mechanical torques generated by optically pumped atomic spin relaxation at surfaces Science.gov (United States) Herman, R. M. 1982-03-01 It is argued that a valuable method of observing certain types of surface-atom interactions may lie in mechanical torques generated through the spin-orbit relaxation of valence electronic spins of optically pumped atoms at surfaces.
The unusual feature of this phenomenon is that the less probable spin-orbit relaxation becomes highly visible as compared with the much more rapid paramagnetic relaxation, because of an enhancement, typically by as much as a factor 10^9, in the torques delivered to mechanical structures, by virtue of a very large effective moment arm. Spin-orbit relaxation operates through an exchange of translational momentum which, in turn, can be identified with the delivery of a gigantic angular momentum (in units of ℏ) relative to a distant axis about which mechanical motion is referred. The spin-orbit relaxation strongly depends upon the atomic number of the surface atoms and the strength of interaction with the optically pumped atoms. Being dominated by high-atomic-number surface atoms, spin-orbit-relaxation rates may not be too strongly influenced by minor surface contamination of lighter-weight optically active atoms. 5. AFM and SFG studies of pHEMA-based hydrogel contact lens surfaces in saline solution: adhesion, friction, and the presence of non-crosslinked polymer chains at the surface. Science.gov (United States) Kim, Seong Han; Opdahl, Aric; Marmo, Chris; Somorjai, Gabor A 2002-04-01 The surfaces of two types of soft contact lenses (neutral and ionic hydrogels) were characterized by atomic force microscopy (AFM) and sum-frequency-generation (SFG) vibrational spectroscopy. AFM measurements in saline solution showed that the presence of ionic functional groups at the surface lowered the friction and adhesion to a hydrophobic polystyrene tip. This was attributed to the specific interactions of water and the molecular orientation of hydrogel chains at the surface. Friction and adhesion behavior also revealed the presence of domains of non-crosslinked polymer chains at the lens surface. SFG showed that the lens surface became partially dehydrated upon exposure to air.
On this partially dehydrated lens surface, the non-crosslinked domains exhibited low friction and adhesion in AFM. Fully hydrated in saline solution, the non-crosslinked domains extended more than tens of nanometers into solution and were mobile. 6. Growth mechanism and surface atomic structure of AgInSe2 Energy Technology Data Exchange (ETDEWEB) Pena Martin, Pamela; Rockett, Angus A.; Lyding, Joseph [Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, 1304 W. Green St., Urbana, Illinois 61801 (United States); Department of Electrical and Computer Engineering and the Beckman Institute, University of Illinois at Urbana-Champaign, 405 N. Matthews St., Urbana, Illinois 61801 (United States) 2012-07-15 The growth of (112)A-oriented AgInSe2 on GaAs (111)A and its surface reconstruction were studied by scanning tunneling microscopy, atomic force microscopy, and other techniques. Films were grown by a sputtering and evaporation method. Topographic STM images reveal that the film grew by atomic incorporation into surface steps resulting from screw dislocations on the surface. The screw dislocation density was ~10^10 cm^-2. Atomically resolved images also show that the surface atomic arrangement appears to be similar to that of the bulk, with a spacing of 0.35-0.41 nm. There is no observable reconstruction, which is unexpected for a polar semiconductor surface. 7. Quantum interference in grazing scattering of swift He atoms from LiF(0 0 1) surfaces: Surface eikonal approximation Energy Technology Data Exchange (ETDEWEB) Gravielle, M.S. [Instituto de Astronomia y Fisica del Espacio, CONICET, Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Dpto. de Fisica, FCEN, Universidad de Buenos Aires, Buenos Aires (Argentina)], E-mail: [email protected]; Miraglia, J.E. [Instituto de Astronomia y Fisica del Espacio, CONICET, Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Dpto.
de Fisica, FCEN, Universidad de Buenos Aires, Buenos Aires (Argentina) 2009-02-15 This work deals with the interference effects recently observed in grazing collisions of few-keV atoms with insulator surfaces. The process is studied within a distorted-wave method, the surface eikonal approximation, based on the use of the eikonal wave function and involving axial channeled trajectories with different initial conditions. The theory is applied to helium atoms impinging on a LiF(0 0 1) surface along the <1 1 0> direction. The role played by the projectile polarization and the surface rumpling is investigated. We found that when both effects are included, the proposed eikonal approach provides angular projectile spectra in good agreement with the experimental findings. 8. Quantum interference in grazing scattering of swift He atoms from LiF(0 0 1) surfaces: Surface eikonal approximation International Nuclear Information System (INIS) Gravielle, M.S.; Miraglia, J.E. 2009-01-01 This work deals with the interference effects recently observed in grazing collisions of few-keV atoms with insulator surfaces. The process is studied within a distorted-wave method, the surface eikonal approximation, based on the use of the eikonal wave function and involving axial channeled trajectories with different initial conditions. The theory is applied to helium atoms impinging on a LiF(0 0 1) surface along the <1 1 0> direction. The role played by the projectile polarization and the surface rumpling is investigated. We found that when both effects are included, the proposed eikonal approach provides angular projectile spectra in good agreement with the experimental findings. 9. Concentration and saturation effects of tethered polymer chains on adsorbing surfaces Science.gov (United States) Descas, Radu; Sommer, Jens-Uwe; Blumen, Alexander 2006-12-01 We consider end-grafted chains at an adsorbing surface under good solvent conditions using Monte Carlo simulations and scaling arguments.
Grafting of chains allows us to fix the surface concentration and to study a wide range of surface concentrations from the undersaturated state of the surface up to the brushlike regime. The average extension of single chains in the direction parallel and perpendicular to the surface is analyzed using scaling arguments for the two-dimensional semidilute surface state according to Bouchaud and Daoud [J. Phys. (Paris) 48, 1991 (1987)]. We find good agreement with the scaling predictions for the scaling in the direction parallel to the surface and for surface concentrations much below the saturation concentration (dense packing of adsorption blobs). Increasing the grafting density we study the saturation effects and the oversaturation of the adsorption layer. In order to account for the effect of excluded volume on the adsorption free energy we introduce a new scaling variable related with the saturation concentration of the adsorption layer (saturation scaling). We show that the decrease of the single chain order parameter (the fraction of adsorbed monomers on the surface) with increasing concentration, being constant in the ideal semidilute surface state, is properly described by saturation scaling only. Furthermore, the simulation results for the chains' extension from higher surface concentrations up to the oversaturated state support the new scaling approach. The oversaturated state can be understood using a geometrical model which assumes a brushlike layer on top of a saturated adsorption layer. We provide evidence that adsorbed polymer layers are very sensitive to saturation effects, which start to influence the semidilute surface scaling even much below the saturation threshold. 10. Atomic and electronic structures of novel silicon surface structures Energy Technology Data Exchange (ETDEWEB) Terry, J.H. Jr. 1997-03-01 The modification of silicon surfaces is presently of great interest to the semiconductor device community. 
Three distinct areas are the subject of inquiry: first, modification of the silicon electronic structure; second, passivation of the silicon surface; and third, functionalization of the silicon surface. It is believed that surface modification of these types will lead to useful electronic devices by pairing these modified surfaces with traditional silicon device technology. Therefore, silicon wafers with modified electronic structure (light-emitting porous silicon), passivated surfaces (H-Si(111), Cl-Si(111), Alkyl-Si(111)), and functionalized surfaces (Alkyl-Si(111)) have been studied in order to determine the fundamental properties of surface geometry and electronic structure using synchrotron radiation-based techniques. 11. A density functional theory study on the carbon chain growth of ethanol formation on Cu-Co (111) and (211) surfaces Science.gov (United States) Ren, Bohua; Dong, Xiuqin; Yu, Yingzhe; Wen, Guobin; Zhang, Minhua 2017-08-01 Calculations based on first-principles density functional theory were carried out to study the most controversial reactions in ethanol formation from syngas on Cu-Co surfaces: the CO dissociation mechanism and the key reactions of carbon chain growth of ethanol formation (HCO insertion reactions) on four model surfaces (Cu-Co (111) and (211) with Cu-rich or Co-rich surfaces) to investigate the synergy of the Cu and Co components, since computing the complete reaction network of ethanol formation from syngas on all four Cu-Co surface models would be a huge computational burden. We investigated adsorption of important species involved in these reactions, activation barriers and reaction energies of the H-assisted dissociation mechanism, direct dissociation of CO, and HCO insertion reactions (CHx + HCO → CHxCHO (x = 1-3)) on four Cu-Co surface models. It was found that reactions on Cu-rich (111) and (211) surfaces all have lower activation barriers in H-assisted dissociation and HCO insertion reactions, especially the CH + HCO → CHCHO reaction.
The PDOS of 4d orbitals of surface Cu and Co atoms of all surfaces were studied. Analysis of the d-band centers of Cu and Co atoms and the activation barrier data suggested the correlation between electronic property and catalytic performance. A Cu-Co bimetallic catalyst with a Cu-rich surface allows Co to have higher catalytic activity through the interaction of Cu and Co atoms; this improves the adsorption of CO and the catalytic activity of Co, and is thus more favorable to carbon chain growth in ethanol formation. Our study revealed the factors influencing carbon chain growth in ethanol production and explained the underlying mechanism from the electronic-property perspective. 12. The calculation of surface free energy based on embedded atom method for solid nickel International Nuclear Information System (INIS) Luo Wenhua; Hu Wangyu; Su Kalin; Liu Fusheng 2013-01-01 Highlights: ► A new solution for accurate prediction of surface free energy based on the embedded atom method was proposed. ► The temperature-dependent anisotropic surface energy of solid nickel was obtained. ► In an isotropic environment, the approach does not change most predictions of bulk material properties. - Abstract: Accurate prediction of the surface free energy of crystalline metals is a challenging task. Theoretical calculations based on embedded atom method potentials often underestimate the surface free energy of metals. With an analytical charge density correction to the argument of the embedding energy of the embedded atom method, an approach to improve the prediction of surface free energy is presented. This approach is applied to calculate the temperature-dependent anisotropic surface energy of bulk nickel and surface energies of nickel nanoparticles, and the obtained results are in good agreement with available experimental data. 13.
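Several entries here (items 10 and 12 in particular) revolve around computing surface energies from model calculations. As a hedged illustration of the bookkeeping such studies share (a generic symmetric-slab formula, not the specific method of any cited paper; all numbers are made-up placeholders), the surface energy is commonly estimated as E_surf = (E_slab − N·E_bulk)/(2A):

```python
# Generic slab estimate of surface energy, as used in first-principles
# and embedded-atom studies.  Placeholder numbers only; not data from
# any of the entries above.

def surface_energy(e_slab, n_atoms, e_bulk_per_atom, area):
    """Surface energy per unit area from a symmetric slab.

    e_slab          : total energy of the slab (eV)
    n_atoms         : number of atoms in the slab
    e_bulk_per_atom : bulk reference energy per atom (eV)
    area            : in-plane cell area (Angstrom^2); the factor 2
                      accounts for the slab's two equivalent surfaces.
    """
    return (e_slab - n_atoms * e_bulk_per_atom) / (2.0 * area)

# Example with placeholder values:
e = surface_energy(e_slab=-53.8, n_atoms=10, e_bulk_per_atom=-5.5, area=6.0)
# (-53.8 + 55.0) / 12 = 0.1 eV per Angstrom^2
```

The underestimation discussed in item 12 enters through the bulk and slab energies themselves; the division above is the same in either case.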
Surface atomic relaxation and magnetism on hydrogen-adsorbed Fe(110) surfaces from first principles Science.gov (United States) Chohan, Urslaan K.; Jimenez-Melero, Enrique; Koehler, Sven P. K. 2016-11-01 We have computed adsorption energies, vibrational frequencies, surface relaxation and buckling for hydrogen adsorbed on a body-centred-cubic Fe(110) surface as a function of the degree of H coverage. This adsorption system is important in a variety of technological processes such as the hydrogen embrittlement in ferritic steels, which motivated this work, and the Haber-Bosch process. We employed spin-polarised density functional theory to optimise geometries of a six-layer Fe slab, followed by frozen mode finite displacement phonon calculations to compute Fe-H vibrational frequencies. We have found that the quasi-threefold (3f) site is the most stable adsorption site, with adsorption energies of ∼3.0 eV/H for all coverages studied. The long-bridge (lb) site, which is close in energy to the 3f site, is actually a transition state leading to the stable 3f site. The calculated harmonic vibrational frequencies collectively span from 730 to 1220 cm-1, for a range of coverages. The increased first-to-second layer spacing in the presence of adsorbed hydrogen, and the pronounced buckling observed in the Fe surface layer, may facilitate the diffusion of hydrogen atoms into the bulk, and therefore impact the early stages of hydrogen embrittlement in steels. 14. Surface phonon modes of the NaI(001) crystal surface by inelastic He atom scattering International Nuclear Information System (INIS) Brug, W.P.; Chern, G.; Duan, J.; Safron, S.A.; Skofronick, J.G.; Benedek, G. 1990-01-01 The present theoretical treatment of the surface dynamics of ionic insulators employs the shell model with parameters obtained from bulk materials. The approach has been generally very successful in comparisons with experiment. 
However, most of the experimental surface dynamics work has been on the low-mass alkali halides, with very little effort on higher energy modes or on the heavier alkali halides, where effects from relaxation might be important. The work of this paper explores these latter two conditions. Inelastic scattering of He atoms along the ⟨110⟩ azimuth of the NaI(001) surface has been used to obtain the acoustic S1 Rayleigh mode, the S6 longitudinal mode, and the S8 crossing mode; however, no S4 gap optical mode was seen. The results compare favorably with reported theoretical models employing both slab calculations and the Green's function method, thus indicating that bulk parameters and the shell model go a long way in explaining most of the observations. 15. He atom surface spectroscopy: Surface lattice dynamics of insulators, metals and metal overlayers International Nuclear Information System (INIS) 1990-01-01 During the first three years of this grant (1985--1988) the effort was devoted to the construction of a state-of-the-art He atom scattering (HAS) instrument which would be capable of determining the structure and dynamics of metallic, semiconductor or insulator crystal surfaces. The second three year grant period (1988--1991) has been dedicated to measurements. The construction of the instrument went better than proposed; it was within budget, finished in the proposed time and of better sensitivity and resolution than originally planned. The same success has been carried over to the measurement phase, where the concentration has been on studies of insulator surfaces, as discussed in this paper. The experiments of the past three years have focused primarily on the alkali halides, with a more recent shift to metal oxide crystal surfaces. Both elastic and inelastic scattering experiments were carried out on LiF, NaI, NaCl, RbCl, KBr, RbBr, RbI, CsF, CsI, with some preliminary work on NiO and MgO. 16.
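The He-atom scattering measurements described in the two entries above rest on simple planar scattering kinematics: energy conservation plus conservation of surface-parallel momentum relate each observed energy loss to a phonon wavevector. A minimal sketch of those relations (the beam energy and angles below are hypothetical, not values from these experiments):

```python
import math

HBAR = 1.054571817e-34   # J*s
M_HE = 6.6464731e-27     # kg, mass of 4He
MEV = 1.602176634e-22    # J per meV

def wavevector(E_meV):
    # Magnitude of the He wavevector (1/m) for a kinetic energy in meV.
    return math.sqrt(2.0 * M_HE * E_meV * MEV) / HBAR

def parallel_momentum_transfer(E_i, dE, theta_i, theta_f):
    # Surface-parallel momentum transfer (1/Angstrom) for an inelastic
    # event changing the He energy by dE (meV; negative = phonon creation),
    # with incident/final angles measured from the surface normal (deg).
    k_i = wavevector(E_i)
    k_f = wavevector(E_i + dE)
    dK = (k_f * math.sin(math.radians(theta_f))
          - k_i * math.sin(math.radians(theta_i)))
    return dK * 1e-10  # 1/m -> 1/Angstrom

# A 20 meV He beam has k_i of about 6.2 per Angstrom; elastic specular
# scattering (dE = 0, theta_i = theta_f) transfers no parallel momentum.
print(round(wavevector(20.0) * 1e-10, 2))
print(parallel_momentum_transfer(20.0, 0.0, 45.0, 45.0))
```

Scanning dE at fixed total scattering angle traces out the "scan curve" whose intersections with the dispersion branches (e.g. the Rayleigh mode) give the observed time-of-flight peaks.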
An Analytical Model for Adsorption and Diffusion of Atoms/Ions on Graphene Surface Directory of Open Access Journals (Sweden) Yan-Zi Yu 2015-01-01 Full Text Available Theoretical investigations are made on the adsorption and diffusion of atoms/ions on a graphene surface based on an analytical continuous model. An atom/ion interacts with every carbon atom of graphene through a pairwise potential which can be approximated by the Lennard-Jones (L-J) potential. Using the Fourier expansion of the interaction potential, the total interaction energy between the adsorbed atom/ion and a monolayer graphene is derived. The energy-distance relationships in the normal and lateral directions for various atoms/ions, including the gold atom (Au), platinum atom (Pt), manganese ion (Mn2+), sodium ion (Na+), and lithium ion (Li+), on the monolayer graphene surface are analyzed. The equilibrium position and binding energy of the atoms/ions at three particular adsorption sites (hollow, bridge, and top) are calculated, and the adsorption stability is discussed. The results show that the H-site is the most stable adsorption site, which is in agreement with results in the literature. What is more, the periodic interaction energies and interaction forces of a lithium ion diffusing along specific paths on the graphene surface are also obtained and analyzed. The minimum energy barrier for diffusion is calculated. Possible applications of the present study include drug delivery systems (DDS), atomic scale friction, rechargeable lithium-ion graphene batteries, and energy storage in carbon materials. 17. Noncontact AFM Imaging of Atomic Defects on the Rutile TiO2 (110) Surface DEFF Research Database (Denmark) Lauritsen, Jeppe Vang 2015-01-01 The atomic force microscope (AFM) operated in the noncontact mode (nc-AFM) offers a unique tool for real space, atomic-scale characterisation of point defects and molecules on surfaces, irrespective of the substrate being electrically conducting or non-conducting.
The nc-AFM has therefore in rece... 18. Alternative types of molecule-decorated atomic chains in Au–CO–Au single-molecule junctions Directory of Open Access Journals (Sweden) Zoltán Balogh 2015-06-01 Full Text Available We investigate the formation and evolution of Au–CO single-molecule break junctions. The conductance histogram exhibits two distinct molecular configurations, which are further investigated by a combined statistical analysis. According to conditional histogram and correlation analysis these molecular configurations show strong anticorrelations with each other and with pure Au monoatomic junctions and atomic chains. We identify molecular precursor configurations with somewhat higher conductance, which are formed prior to single-molecule junctions. According to detailed length analysis two distinct types of molecule-affected chain-formation processes are observed, and we compare these results to former theoretical calculations considering bridge- and atop-type molecular configurations where the latter has reduced conductance due to destructive Fano interference. 19. Surface-initiated Atom Transfer Radical Polymerization - a Technique to Develop Biofunctional Coatings DEFF Research Database (Denmark) Fristrup, Charlotte Juel; Jankova Atanasova, Katja; Hvilsted, Søren 2009-01-01 The initial formation of initiating sites for atom transfer radical polymerization (ATRP) on various polymer surfaces and numerous inorganic and metallic surfaces is elaborated. The subsequent ATRP grafting of a multitude of monomers from such surfaces to generate thin covalently linked polymer... 20. 
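Returning to the analytical continuum model for adsorption on graphene described earlier: integrating a 12-6 Lennard-Jones pair potential over an infinite plane of atoms yields the familiar 10-4 surface potential, whose minimum lies exactly at z = σ. A sketch of that laterally averaged limit, with illustrative parameters (not fitted values from the study):

```python
import math

def lj_plane_potential(z, epsilon, sigma, n_area):
    # 10-4 potential obtained by integrating a 12-6 Lennard-Jones pair
    # potential over an infinite plane with areal atom density n_area.
    return 2.0 * math.pi * n_area * epsilon * sigma**2 * (
        (2.0 / 5.0) * (sigma / z)**10 - (sigma / z)**4
    )

EPS, SIG = 5.0, 3.0   # meV, Angstrom -- illustrative, not fitted values
N_C = 0.382           # areal density of graphene, atoms/Angstrom^2

# Numerical minimum of the height scan; analytically z0 = sigma.
zs = [1.5 + 0.001 * i for i in range(4001)]
z0 = min(zs, key=lambda z: lj_plane_potential(z, EPS, SIG, N_C))
print(round(z0, 3))  # ≈ 3.0 (= sigma)
```

The lateral corrugation that distinguishes the hollow, bridge, and top sites comes from the Fourier components dropped in this averaged form; the 10-4 term above is only the leading (corrugation-free) contribution.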
Atomic species recognition on oxide surfaces using low temperature scanning probe microscopy Energy Technology Data Exchange (ETDEWEB) Ma, Zong Min, E-mail: [email protected] [National Key Laboratory for Electronic Measurement Technology, North University of China, Taiyuan, 030051 (China); Key Laboratory of Instrumentation Science & Dynamic Measurement, North University of China, Ministry of Education, Taiyuan, 030051 (China); School of Instrument and Electronics, North University of China, Taiyuan, 030051 (China); Shi, Yun Bo; Mu, Ji Liang; Qu, Zhang; Zhang, Xiao Ming; Qin, Li [National Key Laboratory for Electronic Measurement Technology, North University of China, Taiyuan, 030051 (China); Key Laboratory of Instrumentation Science & Dynamic Measurement, North University of China, Ministry of Education, Taiyuan, 030051 (China); School of Instrument and Electronics, North University of China, Taiyuan, 030051 (China); Liu, Jun, E-mail: [email protected] [National Key Laboratory for Electronic Measurement Technology, North University of China, Taiyuan, 030051 (China); Key Laboratory of Instrumentation Science & Dynamic Measurement, North University of China, Ministry of Education, Taiyuan, 030051 (China); School of Instrument and Electronics, North University of China, Taiyuan, 030051 (China) 2017-02-01 Highlights: • The coexisting phases of p(2 × 1) and c(6 × 2) on the Cu(110)-O surface were observed using AFM under UHV at low temperature. • Two different c(6 × 2) phases appear depending on the status of the tip apex. • The electronic state of the tip seriously affects the resolution and stability of imaging of the sample surface. - Abstract: In scanning probe microscopy (SPM), the chemical properties and sharpness of the tip of the cantilever greatly influence the scanning of a sample surface. Variation in the chemical properties of the sharp tip apex can induce transformation of the SPM images.
In this research, we explore the relationship between the tip and the structure of a sample surface using dynamic atomic force microscopy (AFM) on a Cu(110)-O surface under ultra-high vacuum (UHV) at low temperature (78 K). We observed two different c(6 × 2) phase types, in which super-Cu atoms appear as bright spots when the tip apex terminates in O atoms, and O atoms appear as bright spots when the tip apex terminates in Cu atoms. We also found that the electronic state of the tip has a serious effect on the resolution and stability of imaging of the sample surface, and provide an explanation for these phenomena. This technique can be used to identify atomic species on sample surfaces, and represents an important development in the SPM technique. 1. Linear-chain model to explain density of states and T_c changes with atomic ordering International Nuclear Information System (INIS) Junod, A. 1978-01-01 The effect of long-range atomic order on the electronic density of states has been recalculated for the A15-type structure within the linear-chain model. It is found that a defect concentration c reduces the density of states at the Fermi level by a factor (1 + c/c0)(c/c0)^(-3) [ln(1 + c/c0)]^3. This result is in qualitative agreement with experimental data on the specific heat, magnetic susceptibility and superconducting transition temperature of V3Au. (author) 2. Surface features on Sahara soil dust particles made visible by atomic force microscope (AFM) phase images OpenAIRE G. Helas; M. O. Andreae 2008-01-01 We show that atomic force microscopy (AFM) phase images can reveal surface features of soil dust particles which are not evident using other microscopic methods. The non-contact AFM method is able to resolve topographical structures in the nanometer range, as well as to uncover repulsive atomic forces and attractive van der Waals forces, and thus gives insight into surface properties.
Though the method does not allow quantitative assignment in terms of chemical compound description, it clearly... 3. Refined potentials for rare gas atom adsorption on rare gas and alkali-halide surfaces Science.gov (United States) Wilson, J. W.; Heinbockel, J. H.; Outlaw, R. A. 1985-01-01 The utilization of models of interatomic potentials for physical interaction to estimate the long range attractive potential for rare gases and ions is discussed. The long range attractive force is calculated in terms of the atomic dispersion properties. A data base of atomic dispersion parameters for rare gas atoms, alkali ions, and halogen ions is applied to the study of the repulsive core; the procedure for evaluating the repulsive core of ion interactions is described. The interaction of rare gas atoms on ideal rare gas solid and alkali-halide surfaces is analyzed; zero-coverage adsorption potentials are derived. 4. Interaction of K(nd) Rydberg atoms with an amorphous gold surface International Nuclear Information System (INIS) Gray, D.F. 1988-01-01 This thesis reports the first controlled study of the interactions of Rydberg atoms with a metal surface. In these experiments, a collimated beam of potassium Rydberg atoms is directed at a plane surface at near grazing incidence. Positive ions formed by surface ionization are attracted to the surface by their image charge, which is counterbalanced by an external electric field applied perpendicular to the surface. The ions are detected by a position-sensitive detector (PSD). At some critical value of the external field, the ion trajectories just miss the surface, suggesting that analysis of the dependence of the ion signals on the external electric field can be used to determine the distance from the surface at which ionization occurs. This distance, and thus the corresponding critical electric field, is expected to be n-dependent.
Experimentally, however, it was observed that the ion signal had a sudden n-independent onset when only a small positive perpendicular electric field was applied at the surface. This observation requires, surprisingly, that the ions produced by surface ionization can readily escape from the surface. The data do, however, show that Rydberg atoms are efficiently ionized in collisions with the surface. This process may provide a useful new detection technique for Rydberg atoms. 5. Synthesis of thermoresponsive poly(N-isopropylacrylamide) brush on silicon wafer surface via atom transfer radical polymerization Energy Technology Data Exchange (ETDEWEB) Turan, Eylem; Demirci, Serkan [Department of Chemistry, Faculty of Art and Science, Gazi University, 06500 Besevler, Ankara (Turkey); Caykara, Tuncer, E-mail: [email protected] [Department of Chemistry, Faculty of Art and Science, Gazi University, 06500 Besevler, Ankara (Turkey) 2010-08-31 A thermoresponsive poly(N-isopropylacrylamide) [poly(NIPAM)] brush on a silicon wafer surface was prepared by combining a self-assembled monolayer of initiator with atom transfer radical polymerization (ATRP). The resulting polymer brush was characterized by in situ reflectance Fourier transform infrared spectroscopy, atomic force microscopy and ellipsometry techniques. Gel permeation chromatography determination of the number-average molecular weight and polydispersity index of the brush detached from the silicon wafer surface suggested that the surface-initiated ATRP method can provide a relatively homogeneous polymer brush. Contact angle measurements exhibited a two-stage increase upon heating over the broad temperature range 25-45 °C, which is in contrast to the fact that the free poly(NIPAM) homopolymer in aqueous solution exhibits a phase transition at ca. 34 °C within a narrow temperature range.
The first de-wetting transition takes place at 27 °C, which can be tentatively attributed to the n-cluster induced collapse of the inner region of the poly(NIPAM) brush close to the silicon surface; the second de-wetting transition occurs at 38 °C, which can be attributed to the outer region of the poly(NIPAM) brush, possessing much lower chain density compared to that of the inner part. 6. Tunneling spectroscopy of a phosphorus impurity atom on the Ge(111)-(2 × 1) surface Energy Technology Data Exchange (ETDEWEB) Savinov, S. V.; Oreshkin, A. I., E-mail: [email protected], E-mail: [email protected] [Moscow State University (Russian Federation); Oreshkin, S. I. [Moscow State University, Sternberg Astronomical Institute (Russian Federation); Haesendonck, C. van [Laboratorium voor Stoffysica en Magnetisme (Belgium)] 2015-06-15 We numerically model the Ge(111)-(2 × 1) surface electronic properties in the vicinity of a P donor impurity atom located near the surface. We find a notable increase in the surface local density of states (LDOS) around the surface dopant near the bottom of the empty surface state band π*, which we call a split state due to its limited spatial extent and energetic position inside the band gap. We show that despite the well-established bulk donor impurity energy level position at the very bottom of the conduction band, a surface donor impurity on the Ge(111)-(2 × 1) surface might produce an energy level below the Fermi energy, depending on the impurity atom's local environment. It is demonstrated that an impurity located in subsurface atomic layers is visible in a scanning tunneling microscope (STM) experiment on the Ge(111)-(2 × 1) surface. The quasi-1D character of the impurity image, observed in STM experiments, is confirmed by our computer simulations, with the note that a few π-bonded dimer rows may be affected by the presence of the impurity atom.
We elaborate a model that allows classifying the atoms in the experimental low-temperature STM image. We show the presence of spatial oscillations of the LDOS by the density-functional theory method. 7. An important atomic process in the CVD growth of graphene: Sinking and up-floating of carbon atom on copper surface International Nuclear Information System (INIS) Li, Yingfeng; Li, Meicheng; Gu, TianSheng; Bai, Fan; Yu, Yue; Trevor, Mwenya; Yu, Yangxin 2013-01-01 By density functional theory (DFT) calculations, the early stages of the growth of graphene on the copper (1 1 1) surface are investigated. At the very beginning of graphene growth, the carbon atom sinks into the subsurface. As more carbon atoms are adsorbed near the site, the sunken carbon atom will spontaneously form a dimer with one of the newly adsorbed carbon atoms, and the formed dimer will float up on top of the surface. We emphasize the role of the co-operative relaxation of the co-adsorbed carbon atoms in facilitating the sinking and up-floating of carbon atoms. In detail: when two carbon atoms are co-adsorbed, their co-operative relaxation will result in different carbon–copper interactions for the co-adsorbed carbon atoms. This difference facilitates the sinking of a single carbon atom into the subsurface. As a third carbon atom is co-adsorbed nearby, it draws the sunken carbon atom onto the top of the surface, forming a dimer. Co-operative relaxations of the surface involving all adsorbed carbon atoms and their copper neighbors facilitate these sinking and up-floating processes. This investigation is helpful for a deeper understanding of graphene synthesis and the choice of optimal carbon sources or processes. 8. Atom International Nuclear Information System (INIS) Auffray, J.P. 1997-01-01 Through the centuries the atom has been imagined, described, explored, then accelerated, combined... But what truly happens inside the atom? And what mechanisms account for its stability?
A physicist and historian of science, Jean-Paul Auffray explains that these questions lie at the heart of modern physics and sheds new light on them. (N.C.) 9. Magnetic character of holmium atom adsorbed on platinum surface Czech Academy of Sciences Publication Activity Database Shick, Alexander; Shapiro, D.S.; Kolorenč, Jindřich; Lichtenstein, A.I. 2017-01-01 Roč. 7, č. 1 (2017), s. 1-6, č. článku 2751. ISSN 2045-2322 R&D Projects: GA ČR GC15-05872J Grant - others:GA MŠk(CZ) LM2015042 Institutional support: RVO:68378271 Keywords: rare-earth adatoms * density-functional theory * single-atom magnets Subject RIV: BM - Solid Matter Physics; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 4.259, year: 2016 10. Chains of benzenes with lithium-atom adsorption: Vibrations and spontaneous symmetry breaking Science.gov (United States) Ortiz, Yenni P.; Stegmann, Thomas; Klein, Douglas J.; Seligman, Thomas H. 2017-09-01 We study the effects of different configurations of adsorbates on the vibrational modes as well as the symmetries of polyacenes and poly-p-phenylenes, focusing on lithium atom adsorption. We found that the spectra of the vibrational modes distinguish the different configurations. For more regular adsorption schemes the lowest states are bending and torsion modes of the skeleton, which are essentially followed by the adsorbate. On poly-p-phenylenes we found that lithium adsorption reduces and often eliminates the torsion between rings, thus increasing symmetry. There is spontaneous symmetry breaking in poly-p-phenylenes due to double adsorption of lithium atoms on alternating rings. 11. Atomic diffusion in laser surface modified AISI H13 steel Science.gov (United States) Aqida, S. N.; Brabazon, D.; Naher, S.
2013-07-01 This paper presents a laser surface modification process for AISI H13 steel using 0.09 and 0.4 mm laser spot sizes, with the aim of increasing surface hardness and investigating element diffusion in the laser-modified surface. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process the AISI H13 steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, pulse repetition frequency (PRF), and overlap percentage. The hardness properties were tested at 981 mN force. Metallographic study and energy dispersive X-ray spectroscopy (EDXS) were performed to observe the presence of elements and their distribution in the sample surface. The maximum hardness achieved in the modified surface was 1017 HV0.1. Changes in element composition in the modified layer region were detected in the laser-modified samples. Diffusion possibly occurred for the C, Cr, Cu, Ni, and S elements. The potential found for increasing surface hardness represents an important method for sustaining tooling life. The EDXS findings clarify the effect of processing parameters on the modified surface composition. 12. The impact of atomization on the surface composition of spray-dried milk droplets. Science.gov (United States) Foerster, Martin; Gengenbach, Thomas; Woo, Meng Wai; Selomulya, Cordelia 2016-04-01 The dominant presence of fat at the surface of spray-dried milk powders has been widely reported in the literature and described as resulting in unfavourable powder properties. The mechanism(s) causing this phenomenon are yet to be clearly identified. A systematic investigation of the component distribution in atomized droplets and spray-dried particles consisting of model milk systems with different fat contents demonstrated that atomization strongly influences the final surface composition.
Cryogenic flash-freezing of uniform droplets from a microfluidic jet nozzle directly after atomization helped to distinguish the influence of the atomization stage from the drying stage. It was confirmed that the overrepresentation of fat on the surface is independent of the atomization technique, including a pressure-swirl single-fluid spray nozzle and a pilot-scale rotary disk spray dryer commonly used in industry. It is proposed that during the atomization stage a disintegration mechanism along the oil-water interface of the fat globules causes the surface predominance of fat. X-ray photoelectron spectroscopic measurements detected the outermost fat layer and some adjacent protein present on both atomized droplets and spray-dried particles. Confocal laser scanning microscopy gave a qualitative insight into the protein and fat distribution throughout the cross-sections, and confirmed the presence of a fat film along the particle surface. The film remained on the surface in the subsequent drying stage, while protein accumulated underneath, driven by diffusion. The results demonstrated that atomization induces component segregation and fat-rich surfaces in spray-dried milk powders, and thus these cannot be prevented by adjusting the spray drying conditions. Copyright © 2016 Elsevier B.V. All rights reserved. 13. Magnetic properties of a single iron atomic chain encapsulated in armchair carbon nanotubes: A Monte Carlo study Energy Technology Data Exchange (ETDEWEB) Masrour, R., E-mail: [email protected] [Laboratory of Materials, Processes, Environment and Quality, Cady Ayyed University, National School of Applied Sciences, PB 63, 46000 Safi (Morocco); Jabar, A. [Laboratory of Materials, Processes, Environment and Quality, Cady Ayyed University, National School of Applied Sciences, PB 63, 46000 Safi (Morocco); Hamedoun, M. [Institute of Nanomaterials and Nanotechnologies, MAScIR, Rabat (Morocco); Benyoussef, A. 
[Institute of Nanomaterials and Nanotechnologies, MAScIR, Rabat (Morocco); Hassan II Academy of Science and Technology, Rabat (Morocco); Hlil, E.K. [Institut Néel, CNRS, Université Grenoble Alpes, 25 rue des Martyrs BP 166, 38042 Grenoble cedex 9 (France)] 2017-06-15 Highlights: • The magnetic properties of an Fe atom chain wrapped in armchair carbon nanotubes have been studied. • The transition temperatures of iron and carbon have been calculated using Monte Carlo simulations. • Multiple magnetic hysteresis loops have been found. - Abstract: The magnetic properties of FeCu_xC_{1-x} have been investigated for an Fe atom chain wrapped in armchair (N,N) carbon nanotubes (N = 4, 6, 8, 10, 12), diluted by Cu2+ ions, using Monte Carlo simulations. The thermal behavior of the total magnetization and magnetic susceptibility is found. The reduced transition temperatures of iron and carbon have been calculated for different N and exchange interactions. The total magnetization is obtained for different exchange interactions and crystal fields. The magnetic hysteresis cycles are obtained for different N, reduced temperatures and exchange interactions. Multiple magnetic hysteresis loops are found. This system can be used as a magnetic nanostructure with potential current and future applications in permanent magnetism, magnetic recording and spintronics. 14. Structures of adsorbed CO on atomically smooth and on stepped single crystal surfaces International Nuclear Information System (INIS) 1980-01-01 The structures of molecular CO adsorbed on atomically smooth surfaces and on surfaces containing monatomic steps have been studied using the electron stimulated desorption ion angular distribution (ESDIAD) method. For CO adsorbed on the close packed Ru(001) and W(110) surfaces, the dominant bonding mode is via the carbon atom, with the CO molecular axis perpendicular to the plane of the surface.
For CO on atomically rough Pd(210), and for CO adsorbed at step sites on four different surfaces vicinal to W(110), the axis of the molecule is tilted or inclined away from the normal to the surface. The ESDIAD method, in which ion desorption angles are related to surface bond angles, provides a direct determination of the structures of adsorbed molecules and molecular complexes on surfaces. 15. Surface Magnetism of Cobalt Nanoislands Controlled by Atomic Hydrogen. Science.gov (United States) Park, Jewook; Park, Changwon; Yoon, Mina; Li, An-Ping 2017-01-11 Controlling the spin states of the surface and interface is key to spintronic applications of magnetic materials. Here, we report the evolution of the surface magnetism of Co nanoislands on Cu(111) upon hydrogen adsorption and desorption, with the hope of realizing reversible control of spin-dependent tunneling. Spin-polarized scanning tunneling microscopy reveals three types of hydrogen-induced surface superstructures, 1H-(2 × 2), 2H-(2 × 2), and 6H-(3 × 3), with increasing H coverage. The prominent magnetic surface states of Co, while being preserved at low H coverage, become suppressed as the H coverage level increases; they can then be recovered by H desorption. First-principles calculations reveal the origin of the observed magnetic surface states by capturing the asymmetry between the spin-polarized surface states, and identify the role of hydrogen in controlling the magnetic states. Our study offers new insights into the chemical control of magnetism in low-dimensional systems. 16. Stripping scattering of fast atoms on surfaces of metal-oxide crystals and ultrathin films International Nuclear Information System (INIS) Blauth, David 2010-01-01 In the framework of the present dissertation the interactions of fast atoms with surfaces of bulk oxides, metals and thin films on metals were studied. The experiments were performed in the regime of grazing incidence of atoms with energies of a few keV.
The advantage of this scattering geometry is its high surface sensitivity and thus the possibility to determine the crystallographic and electronic characteristics of the topmost surface layer. In addition to these experiments, the energy loss and the electron emission induced by the scattered projectiles were investigated. The energies for electron emission and exciton excitation on alumina/NiAl(110) and SiO2/Mo(112) are determined. By detecting the number of projectile-induced emitted electrons as a function of azimuthal angle during rotation of the target surface, the geometrical structure of the atoms forming the topmost layer of different adsorbate films on metal surfaces was determined via ion beam triangulation. (orig.) 17. Scattering of atomic and molecular ions from single crystal surfaces of Cu, Ag and Fe International Nuclear Information System (INIS) Zoest, J.M. van. 1986-01-01 This thesis deals with the analysis of crystal surfaces of Cu, Ag and Fe by Low Energy Ion Scattering Spectroscopy (LEIS). Different atomic and molecular ions with fixed energies below 7 keV are scattered by a metal single crystal (with adsorbates). The energy and direction of the scattered particles are analysed for different selected charge states. In that way information can be obtained concerning the composition and the atomic and electronic structure of the single crystal surface. Energy spectra contain information on the composition of the surface, while structural atomic information is obtained by direction measurements (photograms). In Ch.1 a description is given of the experimental equipment, and in Ch.2 a characterization of the LEIS method. Ch.3 deals with the neutralization of keV ions in surface scattering. Two different ways of data interpretation are presented. First a model is treated in which the observed directional dependence of the neutralizing action of the first atom layer of the surface is represented by a laterally varying thickness of the neutralizing layer.
Secondly, it is shown that the data can be reproduced by a more realistic physical model based on atomic transition matrix elements. In Ch.4 the low energy hydrogen scattering is described. The study of the dissociation of H2+ at an Ag surface resulted in a model based on electronic dissociation, initiated by electron capture into a repulsive (molecular) state. Finally, in Ch.5 the method is applied to the investigation of the surface structure of oxidized Fe. (Auth.) 18. Liquid Atomization Induced by Pulse Laser Reflection underneath Liquid Surface Science.gov (United States) Utsunomiya, Yuji; Kajiwara, Takashi; Nishiyama, Takashi; Nagayama, Kunihito; Kubota, Shiro; Nakahara, Motonao 2009-05-01 We observed a novel effect of pulse laser reflection at the interface between transparent materials with different refractive indices. The electric field intensity doubles when a laser beam is completely reflected from a material with a higher refractive index into a material with a lower index. This effect appreciably reduces the pulse laser ablation threshold of transparent materials. We performed experiments to observe the entire ablation process for laser incidence on the water-air interface using pulse laser shadowgraphy with high-resolution film; the minimum laser fluence for laser ablation at the water-air interface was approximately 12-16 J/cm2. We confirmed that this laser ablation occurs only when the laser beam is incident on the water-air interface from the water side. Many slender liquid ligaments extend like a milk crown and seem to be atomized at the tip. Their detailed structures can be resolved only by pulse laser photography using high-resolution film. 19.
Structural and surface morphological studies of long chain fatty acid thin films deposited by Langmuir-Blodgett technique Energy Technology Data Exchange (ETDEWEB) Das, Nayan Mani, E-mail: [email protected] [Department of Applied Physics, Indian School of Mines, Dhanbad 826004 (India); Roy, Dhrubojyoti [Department of Applied Physics, Indian School of Mines, Dhanbad 826004 (India); Gupta, Mukul [UGC-DAE Consortium for Scientific Research, University Campus, Khandwa Road, Indore 452017 (India); Gupta, P.S. [Department of Applied Physics, Indian School of Mines, Dhanbad 826004 (India) 2012-12-15 In the present work we study the structural and surface morphological characteristics of divalent cation (Cd²⁺)-induced thin mono- to multilayer films of fatty acids such as arachidic acid and stearic acid prepared by the Langmuir-Blodgett (LB) technique. These ultrathin films of various numbers of layers were studied by X-ray diffraction (XRD), X-ray reflectivity (XRR) and atomic force microscopy (AFM). In this specific Y-type deposition, it was found that as the individual layer thickness increases, the corresponding layer-by-layer interfacial electron density of the thin films decreases. Since the fatty acid chain tries to maintain its minimum cross-sectional area, tilting occurs with respect to its nearest neighbor. The tilt angles calculated for 9 layers of cadmium arachidate (CdA₂) and cadmium stearate (CdSt₂) are 18° and 19.5°, respectively. An asymmetric air gap of thickness ≈3 Å was also seen between the tail parts of two molecular chains. The RMS roughness and average height factors calculated through AFM studies show non-uniform surface morphology of both CdA₂ and CdSt₂, although the calculated topographic variations were found to have more irregularity in the case of CdSt₂ than in the case of CdA₂. 20.
The effect of attractions on the structure of fused sphere chains confined between surfaces International Nuclear Information System (INIS) Patra, C.N.; Yethiraj, A.; Curro, J.G. 1999-01-01 The effect of attractive interactions on the behavior of polymers between surfaces is studied using Monte Carlo simulations. The molecules are modeled as fused sphere freely rotating chains with fixed bond lengths and bond angles; wall–fluid and fluid–fluid site–site interaction potentials are of the hard sphere plus Yukawa form. For athermal chains the density at the surface (relative to the bulk) is depleted at low densities and enhanced at high densities. The introduction of a fluid–fluid attraction causes a reduction of site density at the surface, and the introduction of a wall–fluid attraction causes an enhancement of site density at the surface, compared to when these interactions are absent. When the wall–fluid and fluid–fluid attractions are of comparable strength, however, the depletion mechanism due to the fluid–fluid attraction dominates. The center of mass profiles show the same trends as the site density profiles. Near the surface, the parallel and the perpendicular components of chain dimensions are different, which is explained in terms of a reorientation of chains. © 1999 American Institute of Physics. 1.
Fabrication of ultrahydrophobic poly(lauryl acrylate) brushes on silicon wafer via surface-initiated atom transfer radical polymerization Energy Technology Data Exchange (ETDEWEB) Oztuerk, Esra; Turan, Eylem [Department of Chemistry, Faculty of Art and Science, Gazi University, 06500 Besevler, Ankara (Turkey); Caykara, Tuncer, E-mail: [email protected] [Department of Chemistry, Faculty of Art and Science, Gazi University, 06500 Besevler, Ankara (Turkey) 2010-11-15 In this report, ultrahydrophobic poly(lauryl acrylate) [poly(LA)] brushes were synthesized by surface-initiated atom transfer radical polymerization (SI-ATRP) of lauryl acrylate (LA) in N,N-dimethylformamide (DMF) at 90 °C. The formation of ultrahydrophobic poly(LA) films, whose thickness can be tuned by changing the polymerization time, is evidenced by using the combination of ellipsometry, X-ray photoelectron spectroscopy (XPS), grazing angle attenuated total reflectance-Fourier transform infrared spectroscopy (GATR-FTIR), atomic force microscopy (AFM), gel permeation chromatography (GPC), and water contact angle measurements. The SI-ATRP can be conducted in a well-controlled manner, as revealed by the linear kinetic plot, the linear evolution of number-average molecular weights (M̄n) versus monomer conversion, and the relatively narrow PDI (<1.28) of the grafted poly(LA) chains. The calculation of grafting parameters from experimental measurements indicated the synthesis of densely grafted poly(LA) films and allowed us to predict a 'brushlike' conformation for the chains in good solvent. The poly(LA) brushes exhibited a high water contact angle of 163.3 ± 2.8°. 2.
Atomic scale study of the chemistry of oxygen, hydrogen and water at SiC surfaces International Nuclear Information System (INIS) Amy, Fabrice 2007-01-01 Understanding the achievable degree of homogeneity and the effect of surface structure on semiconductor surface chemistry is both academically challenging and of great practical interest to enable fabrication of future generations of devices. In that respect, silicon-terminated SiC surfaces such as the cubic 3C-SiC(100)3×2 and the hexagonal 6H-SiC(0001)3×3 are of special interest since they give a unique opportunity to investigate the role of surface morphology in oxygen or hydrogen incorporation into the surface. In contrast to silicon, the subsurface structure plays a major role in the reactivity, leading to unexpected consequences such as the initial oxidation starting several atomic planes below the top surface or surface metallization by atomic hydrogen. (review article) 3. Dynamics of a Rydberg hydrogen atom near a metal surface in the electron-extraction scheme Energy Technology Data Exchange (ETDEWEB) Iñarrea, Manuel [Área de Física Aplicada, Universidad de La Rioja, Logroño (Spain); Lanchares, Víctor [Departamento de Matemáticas y Computación, Universidad de La Rioja, Logroño, La Rioja (Spain); Palacián, Jesús [Departamento de Ingeniería Matemática e Informática, Universidad Pública de Navarra, Pamplona (Spain); Pascual, Ana I. [Departamento de Matemáticas y Computación, Universidad de La Rioja, Logroño, La Rioja (Spain); Salas, J. Pablo, E-mail: [email protected] [Área de Física Aplicada, Universidad de La Rioja, Logroño (Spain); Yanguas, Patricia [Departamento de Ingeniería Matemática e Informática, Universidad Pública de Navarra, Pamplona (Spain) 2015-01-23 We study the classical dynamics of a Rydberg hydrogen atom near a metal surface in the presence of a constant electric field in the electron-extraction situation [1], i.e., when the field attracts the electron towards the vacuum.
From a dynamical point of view, this field configuration provides richer dynamics than the usual ion-extraction scheme because, depending on the values of the field and the atom–surface distance, the atom can be ionized only towards the metal surface, only towards the vacuum, or towards both sides. The evolution of the phase space structure as a function of the atom–surface distance is explored in the bound regime of the atom. In the high energy regime, the ionization mechanism is also investigated. We find that the classical results of this work are in good agreement with the results obtained in the wave-packet propagation study carried out by So et al. [1]. - Highlights: • We study a classical hydrogen atom near a metal surface plus an electric field. • We explore the phase space structure as a function of the field strength. • We find most of the electronic orbits are oriented along the field direction. • We study the ionization of the atom for several atom–surface distances. • This classical study is in good agreement with the quantum results. 4. SASP '86: Symposium on atomic and surface physics International Nuclear Information System (INIS) Howorka, F.; Lindinger, W.; Maerk, T.D. 1986-02-01 71 papers are presented on subject matters indicated in the section headings: 1) Ion-neutral and neutral-neutral interactions in the gas phase; 2) Laser physics and photonics; 3) Electron collisions and electronic capture; 4) Ion-surface interaction and plasma-related effects; 5) Cluster physics. 70 of these are of INIS interest and are treated separately. (G.Q.) 5. Surface forces studied with colloidal probe atomic force microscopy NARCIS (Netherlands) Giesbers, M. 2001-01-01 Forces between surfaces are a determining factor for the performance of natural as well as synthetic colloidal systems, and play a crucial role in industrial production processes. Measuring these forces is a scientific and experimental challenge and over the years several techniques have 6.
Chains of benzenes with lithium-atom adsorption: Vibrations and spontaneous symmetry breaking OpenAIRE Ortiz, Yenni P.; Stegmann, Thomas; Klein, Douglas J.; Seligman, Thomas H. 2016-01-01 We study effects of different configurations of adsorbates on the vibrational modes as well as symmetries of polyacenes and poly-p-phenylenes focusing on lithium atom adsorption. We found that the spectra of the vibrational modes distinguish the different configurations. For more regular adsorption schemes the lowest states are bending and torsion modes of the skeleton, which are essentially followed by the adsorbate. On poly-p-phenylenes we found that lithium adsorption reduces and often eli... 7. He atom surface scattering: Surface dynamics of insulators, overlayers and crystal growth International Nuclear Information System (INIS) 1992-01-01 Investigations have focused primarily on surface structure and dynamics of ionic insulators, epitaxial growth onto alkali halide crystals and multiphoton studies. The surface dynamics of RbCl has been re-examined. We have developed a simple force constant model which provides insight into the dynamics of KBr overlayers on NaCl(001), a system with a large lattice mismatch. The KBr/NaCl(001) results are compared to Na/Cu(001) and NaCl/Ge(001). We have completed epitaxial growth experiments for KBr onto RbCl(001). Slab dynamics calculations using a shell model for this system with very small lattice mismatch are being carried out in collaboration with Professor Manson of Clemson University and with Professor Schroeder in Regensburg, Germany. Extensive experiments on multiphoton scattering of helium atoms onto NaCl and, particularly, LiF have been carried out and the theory has been developed to a rather advanced stage by Professor Manson. This work will permit the extraction of more information from time-of-flight spectra. It is shown that the theoretical model provides a very good description of the multiphoton scattering from organic films. 
Work has started on self-assembling organic films on gold (alkyl thiols/Au(111)). We have begun to prepare and characterize the gold crystal; one of the group members has spent two weeks at the Oak Ridge National Laboratory learning the proper Au(111) preparation techniques. One of our students has carried out neutron scattering experiments on NiO, measuring both bulk phonon and magnon dispersion curves. 8. Surface modification of polystyrene with atomic oxygen radical anions-dissolved solution International Nuclear Information System (INIS) Wang Lian; Yan Lifeng; Zhao Peitao; Torimoto, Yoshifumi; Sadakata, Masayoshi; Li Quanxin 2008-01-01 A novel approach to surface modification of polystyrene (PS) polymer with an atomic oxygen radical anion-dissolved solution (named O⁻ water) has been investigated. The O⁻ water, generated by bubbling a flux of O⁻ (atomic oxygen radical anions) into deionized water, was characterized by UV-absorption spectroscopy and electron paramagnetic resonance (EPR) spectroscopy. The O⁻ water treatments caused an obvious increase of the surface hydrophilicity, surface energy and surface roughness, and also an alteration of the surface chemical composition of PS surfaces, as indicated by the variation of the contact angle and by material characterization using atomic force microscope (AFM) imaging, field emission scanning electron microscopy (FESEM), X-ray photoelectron spectroscopy (XPS), and attenuated total-reflection Fourier transform infrared (ATR-FTIR) measurements. In particular, it was found that hydrophilic groups such as hydroxyl (OH) and carbonyl (C=O) groups were introduced onto the polystyrene surfaces via the O⁻ water treatment, leading to the increases of surface hydrophilicity and surface energy. The active oxygen species react with the aromatic ring molecules on the PS surfaces and decompose the aromatic compounds to produce hydrophilic hydroxyl and carbonyl compounds.
In addition, the O⁻ water can be considered a 'clean solution': no toxic chemicals are added, and it is easy to handle at room temperature. The present method may potentially be suited to the surface modification of polymers and other heat-sensitive materials. 9. Adsorption of atomic oxygen (N2O) on a clean Ge(001) surface NARCIS (Netherlands) Zandvliet, Henricus J.W.; Keim, Enrico G.; van Silfhout, Arend 1990-01-01 We present the results of a study concerning the interaction of atomic oxygen (as released by decomposition of N2O) with the clean Ge(001)2×1 surface at 300 K. Ellipsometry in the photon energy range of 1.5–4 eV, surface conductance measurements and Auger electron spectroscopy (AES) have been used 10. Scattering of hyperthermal argon atoms from clean and D-covered Ru surfaces NARCIS (Netherlands) Ueta, H.; Gleeson, M.A.; Kleyn, A.W. 2011-01-01 Hyperthermal Ar atoms were scattered from a Ru(0001) surface held at temperatures of 180, 400 and 600 K, and from a Ru(0001)-(1×1)D surface held at 114 and 180 K. The resultant angular intensity and energy distributions are complex. The in-plane angular distributions have narrow (FWHM ≤ 10°) 11. Design of Rotary Atomizer Using Characteristics of Thin Film Flow on Solid Surfaces Energy Technology Data Exchange (ETDEWEB) Park, Boo Seong; Kim, Bo Hung [Univ. of Ulsan, Ulsan (Korea, Republic of) 2013-12-15 A disc-type rotary atomizer affords advantages such as superior paint transfer efficiency, uniformity of paint pattern and particle size, and lower consumption of compressed air compared to a spray-gun-type atomizer. Furthermore, it can be applied to all types of painting materials, and it is suitable for large-scale processes such as car painting. The painting quality, which is closely related to the atomizer performance, is determined by the uniformity and droplet size in accordance with the design of the bell disc surface.
This study establishes the basics of how to design such a surface by modeling the operating bell disc's RPM, diameter, surface angle, and film thickness, considering dye characteristics such as viscosity, density, and surface affinity. 12. Selective propagation and beam splitting of surface plasmons on metallic nanodisk chains. Science.gov (United States) Hu, Yuhui; Zhao, Di; Wang, Zhenghan; Chen, Fei; Xiong, Xiang; Peng, Ruwen; Wang, Mu 2017-05-01 Manipulating the propagation of surface plasmons (SPs) at the nanoscale is a fundamental issue of nanophotonics. By using a focused electron beam, SPs can be excited with high spatial accuracy. Here we report on the propagation of SPs on a chain of gold nanodisks with cathodoluminescence (CL) spectroscopy. Experimental evidence for the propagation of SPs excited by the focused electron beam is demonstrated. The wavelength of the transmitted SPs depends on the geometrical parameters of the nanodisk chain. Furthermore, we design and fabricate a beam splitter, which selectively transmits SPs of certain wavelengths to a specific direction. By scanning the sample surface point by point and collecting the CL spectra, we obtain the spectral mapping and identify that a chain of smaller nanodisks can efficiently transport SPs at shorter wavelengths. This Letter provides a unique approach to manipulate in-plane propagation of SPs. 13. Optical characterization of gold chains and steps on the vicinal Si(557) surface: Theory and experiment Energy Technology Data Exchange (ETDEWEB) Hogan, Conor [Consiglio Nazionale delle Ricerche, Istituto di Struttura della Materia, via Fosso del Cavaliere 100, 00133 Rome (Italy); Department of Physics and European Theoretical Spectroscopy Facility (ETSF), University of Rome 'Tor Vergata', Via della Ricerca Scientifica 1, 00133 Rome (Italy); McAlinden, Niall; McGilp, John F.
[School of Physics, Trinity College Dublin, Dublin 2 (Ireland) 2012-06-15 We present a joint experimental-theoretical study of the reflectance anisotropy of clean and gold-covered Si(557), a vicinal surface of Si(111) upon which gold forms quasi-one-dimensional (1D) chains parallel to the steps. By means of first-principles calculations, we analyse the close relationship between the various surface structural motifs and the optical properties. Good agreement is found between experimental and computed spectra of single-step models of both clean and Au-adsorbed surfaces. Spectral fingerprints of monoatomic gold chains and silicon step edges are identified. The role of spin-orbit coupling (SOC) in the surface optical properties is examined, and found to have little effect. (© 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) 14. Anchoring of alkyl chain molecules on oxide surface using silicon alkoxide Energy Technology Data Exchange (ETDEWEB) Narita, Ayumi, E-mail: [email protected] [Quantum Beam Science Directorate, Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Graduate School of Science and Engineering, Ibaraki University, Bunnkyo, Mito-shi, Ibaraki-ken 310-8512 (Japan); Baba, Yuji; Sekiguchi, Tetsuhiro; Shimoyama, Iwao; Hirao, Norie [Quantum Beam Science Directorate, Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Yaita, Tsuyoshi [Quantum Beam Science Directorate, Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Graduate School of Science and Engineering, Ibaraki University, Bunnkyo, Mito-shi, Ibaraki-ken 310-8512 (Japan) 2012-01-01 Chemical states of the interfaces between octadecyl-triethoxy-silane (ODTS) molecules and a sapphire surface were measured by X-ray photoelectron spectroscopy (XPS) and near edge X-ray absorption fine structure (NEXAFS) using synchrotron soft X-rays.
A nearly self-assembled monolayer of ODTS was formed on the sapphire surface. From the XPS and NEXAFS measurements, it was established that a chemical bond forms between the silicon alkoxide in ODTS and the surface, and that the alkane chain of ODTS points away from the surface. It was thus established that silicon alkoxide is a good anchor for the immobilization of organic molecules on oxides. 15. Conformational explosion: Understanding the complexity of short chain para-dialkylbenzene potential energy surfaces Science.gov (United States) Mishra, Piyush; Hewett, Daniel M.; Zwier, Timothy S. 2018-05-01 The single-conformation ultraviolet and infrared spectroscopy of three short-chain para-dialkylbenzenes (para-diethylbenzene, para-dipropylbenzene, and para-dibutylbenzene) is reported for the jet-cooled, isolated molecules. The present study builds on previous work on single-chain n-alkylbenzenes, where an anharmonic local mode Hamiltonian method was developed to account for stretch-bend Fermi resonance in the alkyl CH stretch region [D. P. Tabor et al., J. Chem. Phys. 144, 224310 (2016)]. The jet-cooled molecules are interrogated using laser-induced fluorescence (LIF) excitation, fluorescence dip infrared spectroscopy, and dispersed fluorescence. The LIF spectra in the S1 ← S0 origin region show a dramatic increase in the number of resolved transitions with increasing length of the alkyl chains, reflecting an explosion in the number of unique low-energy conformations formed when two independent alkyl chains are present. Since the barriers to isomerization of the alkyl chain are similar in size, this results in an "egg carton" shaped potential energy surface. A combination of electronic frequency shift and alkyl CH stretch infrared spectra is used to generate a consistent set of conformational assignments.
Using these experimental techniques in conjunction with computational methods, subsets of origin transitions in the LIF excitation spectrum can be classified into different conformational families. Two conformations are resolved in para-diethylbenzene, seven in para-dipropylbenzene, and about nineteen in para-dibutylbenzene. These chains are largely independent of each other, as there are no new single-chain conformations induced by the presence of a second chain. A cursory LIF excitation scan of para-dioctylbenzene shows a broad congested spectrum at frequencies consistent with interactions of alkyl chains with the phenyl π cloud. 16. Long-chain alkenone unsaturation index as sea surface temperature proxy in southwest Bay of Bengal Digital Repository Service at National Institute of Oceanography (India) Sarma, N.S.; Pasha, S.K.G.; SriRamKrishna, M.; Shirodkar, P.V.; Yadava, M.G.; Rao, K.M. As a proxy for the sea surface temperature (SST), C₃₇ long-chain alkenones (LCAs) preserved in sediments of the southwestern Bay of Bengal and dating back to the last glacial period were identified in SIM GC-EI MS spectra run at m/z 530... 17. Surface water retardation around single-chain polymeric nanoparticles: critical for catalytic function? Science.gov (United States) Stals, Patrick J M; Cheng, Chi-Yuan; van Beek, Lotte; Wauters, Annelies C; Palmans, Anja R A; Han, Songi; Meijer, E W 2016-03-01 A library of water-soluble dynamic single-chain polymeric nanoparticles (SCPN) was prepared using a controlled radical polymerisation technique followed by the introduction of functional groups, including probes at targeted positions. The combined tools of electron paramagnetic resonance (EPR) and Overhauser dynamic nuclear polarization (ODNP) reveal that these SCPNs have structural and surface hydration properties resembling those of enzymes. 18.
The Luttinger liquid in superlattice structures: atomic gases, quantum dots and the classical Ising chain International Nuclear Information System (INIS) Bhattacherjee, Aranya B; Jha, Pradip; Kumar, Tarun; Mohan, Man 2011-01-01 We study the physical properties of a Luttinger liquid in a superlattice characterized by two alternating tunneling parameters. Using the bosonization approach, we describe the corresponding Hubbard model by the equivalent Tomonaga-Luttinger model. We analyze the spin-charge separation and transport properties of the superlattice system. We suggest that cold Fermi gases trapped in a bichromatic optical lattice and coupled quantum dots offer the opportunity to measure these effects in a convenient manner. We also study the classical Ising chain with two tunneling parameters. We find that the classical two-point correlator decreases as the difference between the two tunneling parameters increases. 19. Resonance studies of H atoms adsorbed on frozen H2 surfaces International Nuclear Information System (INIS) Crampton, S.B. 1980-01-01 Observations are reported of the ground state hyperfine resonance of hydrogen atoms stored in a 5 cm diameter bottle coated with frozen molecular hydrogen. Dephasing of the hyperfine resonance while the atoms are adsorbed produces frequency shifts which vary by a factor of two over the temperature range 3.7 K to 4.6 K, and radiative decay rates which vary by a factor of five over this range. The magnitudes and temperature dependences of the frequency shifts and decay rates are consistent with a non-uniform distribution of surface adsorption energies with mean about 38(8) K, in agreement with theoretical estimates for a smooth surface. Extrapolation of the 30 ns mean adsorption times at 4.2 K predicts very long adsorption times for H on H2 below 1 K.
Studies of level population recovery rates provide evidence for surface electron spin exchange collisions between adsorbed atoms with collision duration long compared to the hyperfine period, suggesting that the atoms are partially mobile on the surface. The lowest rates observed for level population recovery set a lower limit of about 500 atom-surface collisions at 4.2 K without recombination. 20. Dynamical interaction of He atoms with metal surfaces: Charge transfer processes International Nuclear Information System (INIS) Flores, F.; Garcia Vidal, F.J.; Monreal, R. 1993-01-01 A self-consistent Kohn-Sham LCAO method is presented to calculate the charge transfer processes between a He*-atom and metal surfaces. Intra-atomic correlation effects are taken into account by considering each single He orbital independently and by combining the different charge transfer processes into a set of dynamical rate equations for the different ion charge fractions. Our discussion reproduces the experimental evidence qualitatively and gives strong support to the method presented here. (author). 24 refs, 4 figs. 1. Spatial and energy distributions of satellite-speed helium atoms reflected from satellite-type surfaces International Nuclear Information System (INIS) Liu, S.M.; Rodgers, W.E.; Knuth, E.L. 1977-01-01 Interactions of satellite-speed helium atoms (accelerated in an expansion from an arc-heated supersonic-molecular-beam source) with practical satellite surfaces have been investigated experimentally. The density and energy distributions of the scattered atoms were measured using a detection system developed for this study. This detection system includes (a) a target positioning mechanism, (b) a detector rotating mechanism, and (c) a mass spectrometer and/or a retarding-field energy analyzer. (Auth.) 2.
Passivation of CdZnTe surfaces by oxidation in low energy atomic oxygen International Nuclear Information System (INIS) Chen, H.; Chattopadhyay, K.; Chen, K.; Burger, A.; George, M.A.; Gregory, J.C.; Nag, P.K.; Weimer, J.J.; James, R.B. 1999-01-01 A method of surface passivation of Cd₁₋ₓZnₓTe (CZT) x-ray and gamma ray detectors has been established using microwave-assisted atomic oxygen bombardment. Detector performance is significantly enhanced due to the reduction of surface leakage current. CZT samples were exposed to an atomic oxygen environment at the University of Alabama in Huntsville's Thermal Atomic Oxygen Facility. This system generates neutral atomic oxygen species with kinetic energies of 0.1-0.2 eV. The surface chemical composition and its morphology modification due to atomic oxygen exposure were studied by x-ray photoelectron spectroscopy and atomic force microscopy, and the results were correlated with current-voltage measurements and with room temperature spectral responses to ¹³³Ba and ²⁴¹Am radiation. A reduction of leakage current by about a factor of 2 is reported, together with significant improvement in the gamma-ray line resolution. © 1999 American Vacuum Society 3. Applications of IBSOM and ETEM for solving the nonlinear chains of atoms with long-range interactions Science.gov (United States) Foroutan, Mohammadreza; Zamanpour, Isa; Manafian, Jalil 2017-10-01 This paper presents a number of new solutions obtained for a complex nonlinear equation describing the dynamics of nonlinear chains of atoms via the improved Bernoulli sub-ODE method (IBSOM) and the extended trial equation method (ETEM). The proposed solutions are kink solitons, anti-kink solitons, soliton solutions, hyperbolic solutions, trigonometric solutions, and bell-shaped soliton solutions. Our new results are then compared with well-known results.
The methods used here are very simple and succinct and can also be applied to other nonlinear models. The balance number of these methods is not constant, in contrast to other methods. The proposed methods also allow us to establish many new types of exact solutions. By utilizing the Maple software package, we show that all obtained solutions satisfy the conditions of the studied model. More importantly, the solutions found in this work can have significant applications in Hamilton's equations and generalized momentum, where solitons are used for long-range interactions. 4. Localization of cesium on montmorillonite surface investigated by frequency modulation atomic force microscopy Science.gov (United States) Araki, Yuki; Satoh, Hisao; Okumura, Masahiko; Onishi, Hiroshi 2017-11-01 Cation exchange in clay minerals is typically analyzed without microscopic study of the clay surfaces. In order to reveal the distribution of exchangeable cations at the clay surface, we performed in situ atomic-scale observations of the surface changes in Na-rich montmorillonite due to exchange with Cs cations using frequency modulation atomic force microscopy (FM-AFM). Lines of protrusion were observed on the surface in aqueous CsCl solution. The amount of Cs in the montmorillonite particles analyzed by energy dispersive X-ray spectrometry was consistent with the ratio of the number of linear protrusions to all protrusions in the FM-AFM images. The results showed that the protrusions represent adsorbed Cs cations. The images indicated that Cs cations at the surface were immobile, and their occupancy remained constant at 10% of the cation sites at the surface for different immersion times in the CsCl solution. This suggests that the mobility and the number of Cs cations at the surface are controlled by the permanent charge of montmorillonite; however, the Cs distribution at the surface is independent of the charge distribution of the inner silicate layer.
Our atomic-scale observations demonstrate that surface cations are distributed in different ways in montmorillonite and mica. 5. Atomic structure of diamond {111} surfaces etched in oxygen water vapor International Nuclear Information System (INIS) Theije, F.K. de; Reedijk, M.F.; Arsic, J.; Enckevort, W.J.P. van; Vlieg, E. 2001-01-01 The atomic structure of the {111} diamond face after oxygen-water-vapor etching is determined using x-ray scattering. We find that a single-dangling-bond diamond {111} surface model, terminated by a full monolayer of -OH, fits our data best. To explain the measurements it is necessary to add an ordered water layer on top of the -OH terminated surface. The vertical contraction of the surface cell and the distance between the oxygen atoms are generally in agreement with model calculations and results on similar systems. The OH termination is likely to be present during etching as well. This model experimentally confirms the atomic-scale mechanism we proposed previously for this etching system. 6. Observation of modified radiative properties of cold atoms in vacuum near a dielectric surface International Nuclear Information System (INIS) Ivanov, V V; Cornelussen, R A; Heuvell, H B van Linden van den; Spreeuw, R J C 2004-01-01 We have observed a distance-dependent absorption linewidth of cold ⁸⁷Rb atoms close to a dielectric-vacuum interface. This is the first observation of modified radiative properties in vacuum near a dielectric surface. A cloud of cold atoms was created using a magneto-optical trap (MOT) and optical molasses cooling. Evanescent waves (EW) were used to observe the behaviour of the atoms near the surface. We observed an increase of the absorption linewidth by up to 25% with respect to the free-space value. Approximately half the broadening can be explained by cavity quantum electrodynamics (CQED) as an increase of the natural linewidth and inhomogeneous broadening.
The remainder we attribute to local Stark shifts near the surface. By varying the characteristic EW length we have observed a distance dependence characteristic of CQED. 7. Quantum trajectories in elastic atom-surface scattering: threshold and selective adsorption resonances. Science.gov (United States) Sanz, A S; Miret-Artés, S 2005-01-01 The elastic resonant scattering of He atoms off the Cu(117) surface is fully described with the formalism of quantum trajectories provided by Bohmian mechanics. Within this theory of quantum motion, the concept of trapping is widely studied and discussed. Classically, atoms undergo impulsive collisions with the surface, and then the trapped motion takes place covering at least two consecutive unit cells. However, from a Bohmian viewpoint, atom trajectories can smoothly adjust to the equipotential energy surface profile in a sort of sliding motion; thus the trapping process could eventually occur within one single unit cell. In particular, both threshold and selective adsorption resonances are explained by means of this quantum trapping considering different space and time scales. Furthermore, a mapping between each region of the (initial) incoming plane wave and the different parts of the diffraction and resonance patterns can be easily established, an important issue only provided by a quantum trajectory formalism. © 2005 American Institute of Physics. 8. On the Debye-Waller factor in atom-surface scattering International Nuclear Information System (INIS) Garcia, N.; Maradudin, A.A.; Celli, V. 1982-01-01 A theory for the Debye-Waller factor in atom-surface scattering is presented, to lowest order in the phonon contributions. Multiple-scattering effects as well as the cross-correlated surface atom displacements are included. The theory accounts for experimental data without the necessity of introducing the Armand effect, which is due to the finite size of the incident atom.
The work presented here implies that the Kirchhoff approximation fails when the energy of the incident particle is in the energy range of the phonon spectrum. The results of the calculation are presented in the high-temperature limit, and it is observed that the Rayleigh surface phonons contribute three-quarters of the Debye-Waller factor, while the bulk phonons account for the rest. This result is interesting because the calculation of the former contribution is simpler than that of the latter. (author) 9. Adsorption of a single polymer chain on a surface: effects of the potential range. Science.gov (United States) Klushin, Leonid I; Polotsky, Alexey A; Hsu, Hsiao-Ping; Markelov, Denis A; Binder, Kurt; Skvortsov, Alexander M 2013-02-01 We investigate the effects of the range of adsorption potential on the equilibrium behavior of a single polymer chain end-attached to a solid surface. The exact analytical theory for ideal lattice chains interacting with a planar surface via a box potential of depth U and width W is presented and compared to continuum model results and to Monte Carlo (MC) simulations using the pruned-enriched Rosenbluth method for self-avoiding chains on a simple cubic lattice. We show that the critical value U_c corresponding to the adsorption transition scales as W^(-1/ν), where the exponent ν = 1/2 for ideal chains and ν ≈ 3/5 for self-avoiding walks. Lattice corrections for finite W are incorporated in the analytical prediction of the ideal chain theory, U_c ≈ (π²/24)(W + 1/2)^(-2), and in the best-fit equation for the MC simulation data, U_c = 0.585(W + 1/2)^(-5/3). Tail, loop, and train distributions at the critical point are evaluated by MC simulations for 1 ≤ W ≤ 10 and compared to analytical results for ideal chains and with scaling theory predictions. The behavior of a self-avoiding chain is remarkably close to that of an ideal chain in several aspects.
We demonstrate that the bound fraction θ and the related properties of finite ideal and self-avoiding chains can be presented in a universal reduced form: θ(N, U, W) = θ(NU_c, U/U_c). By utilizing precise estimations of the critical points we investigate the chain-length dependence of the ratio of the normal and lateral components of the gyration radius. Contrary to common expectations, this ratio attains a limiting universal value of 0.320 ± 0.003 only at N ~ 5000. Finite-N corrections for this ratio turn out to be of the opposite sign for W = 1 and for W ≥ 2. We also study the N dependence of the apparent crossover exponent φ_eff(N). Strong corrections to scaling of order N^(-0.5) are observed, and the extrapolated value φ = 0.483 ± 0.003 is found for all values of W. The strong correction-to-scaling effects found here explain why 10. Grazing incidence collisions of ions and atoms with surfaces: from charge exchange to atomic diffraction; Collisions rasantes d'ions ou d'atomes sur les surfaces: de l'echange de charge a la diffraction atomique Energy Technology Data Exchange (ETDEWEB) Rousseau, P 2006-09-15 This thesis reports two studies of the interaction of keV ions or atoms with insulating surfaces under grazing incidence. The first part presents a study of charge exchange processes occurring during the interaction of singly charged ions with the surface of NaCl. In particular, by measuring the scattered charge fraction and the energy loss in coincidence with electron emission, the neutralization mechanism is determined for S⁺, C⁺, Xe⁺, H⁺, O⁺, Kr⁺, N⁺, Ar⁺, F⁺, Ne⁺ and He⁺. These results show the importance of double electron capture as a neutralization process for ions having too much potential energy for resonant capture and not enough for Auger neutralization.
We have also studied the ionisation of the projectile and of the surface, and the different Auger-like neutralization processes resulting in electron emission, population of the conduction band, or excited states. For oxygen scattering, we measured a higher electron yield in coincidence with scattered negative ions than with scattered atoms, suggesting the transient formation of a doubly negative oxygen ion above the surface. The second study deals with fast atom diffraction, a new phenomenon observed for the first time during this work. Due to the large parallel velocity, the surface appears as a corrugated wall where rows interfere. Similarly to thermal atom scattering, the diffraction pattern corresponds to the surface potential and is sensitive to vibrations. We have studied the H-NaCl and He-LiF atom-surface potentials in the 20 meV – 1 eV range. This new method offers interesting perspectives for surface characterisation. (author) 11. Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing Science.gov (United States) Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun 2016-01-01 Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to step formation in the manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface using both experiments and simulations. The step evolutions at different stages of the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of experimental results.
It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. It has been found that the structure on the polished surface was in accordance with some surface compositions (the model of single-atom steps: Al steps or O steps). PMID:27444267 12. Hydrogen atom addition to the surface of graphene nanoflakes: A density functional theory study Energy Technology Data Exchange (ETDEWEB) Tachikawa, Hiroto, E-mail: [email protected] 2017-02-28 Highlights: • The reaction pathway of the hydrogen addition to graphene surface was determined by the DFT method. • Binding energies of atomic hydrogen to graphene surface were determined. • Absorption spectrum of hydrogenated graphene was theoretically predicted. • Hyperfine coupling constant of hydrogenated graphene was theoretically predicted. - Abstract: Polycyclic aromatic hydrocarbons (PAHs) provide a 2-dimensional (2D) reaction surface in 3-dimensional (3D) interstellar space and have been utilized as a model of graphene surfaces. In the present study, the reaction of PAHs with atomic hydrogen was investigated by means of density functional theory (DFT) to systematically elucidate the binding nature of atomic hydrogen to graphene nanoflakes. PAHs with n = 4–37 were chosen, where n indicates the number of benzene rings. Activation energies of hydrogen addition to the graphene surface were calculated to be 5.2–7.0 kcal/mol at the CAM-B3LYP/6-311G(d,p) level, which is almost constant for all PAHs. The binding energies of hydrogen atom were slightly dependent on the size (n): 14.8–28.5 kcal/mol. 
The absorption spectra showed that a long tail appears in the low-energy region after hydrogen addition to the graphene surface. The electronic states of hydrogenated graphenes were discussed on the basis of the theoretical results. 13. Surface adhesion properties of graphene and graphene oxide studied by colloid-probe atomic force microscopy International Nuclear Information System (INIS) Ding Yanhuai; Zhang Ping; Ren Huming; Zhuo Qin; Yang Zhongmei; Jiang Xu; Jiang Yong 2011-01-01 Surface adhesion properties are important to various applications of graphene-based materials. Atomic force microscopy is a powerful tool for studying the adhesion properties of samples by measuring the forces on a colloidal sphere tip as it approaches and retracts from the surface. In this paper we have measured the adhesion force between the colloid probe and the surface of graphene (graphene oxide) nanosheets. The results revealed that the adhesion forces on the graphene and graphene oxide surfaces were 66.3 and 170.6 nN, respectively. It was found that the adhesion force was mainly determined by the water meniscus, which is related to the surface contact angle of the samples. 14. Surface coverage of Pt atoms on PtCo nanoparticles and catalytic kinetics for oxygen reduction Energy Technology Data Exchange (ETDEWEB) Jiang Rongzhong, E-mail: [email protected] [Sensors and Electron Devices Directorate, U.S. Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD 20783-1197 (United States); Rong, Charles; Chu, Deryn [Sensors and Electron Devices Directorate, U.S. Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD 20783-1197 (United States)] 2011-02-01 The surface coverage of Pt atoms on PtCo nanoparticles and its effect on the catalytic kinetics of oxygen reduction were investigated. PtCo nanoparticles with different surface coverages of Pt atoms were synthesized by various methods, including a normal chemical method, microemulsion synthesis, and ultrasound-assisted microemulsion.
A model of Pt atoms filling into a spherical nanoparticle was proposed to explain the relationship between surface metal atoms and nanoparticle size. The catalytic activity of the PtCo nanoparticles is highly dependent on the synthetic method, even if they have the same chemical composition. The PtCo nanoparticles synthesized with ultrasound-assisted microemulsion showed the highest activity, which is attributed to an increased active surface coverage of Pt atoms on the metal nanoparticles. The rate of oxygen reduction at 0.5 V (vs. SCE) catalyzed by the PtCo synthesized with ultrasound-assisted microemulsion was about four times higher than that of the PtCo synthesized with the normal chemical method. As demonstrated by rotating-ring disk electrode measurements, the PtCo nanoparticles can catalyze the 4-electron reduction of oxygen to water without detectable H₂O₂ intermediate. 15. The kinetics of formation and transformation of silver atoms on solid surfaces subjected to ionizing irradiation International Nuclear Information System (INIS) Popovich, G.M. 1988-01-01 The paper discusses the results obtained in ESR-assisted studies of the kinetics of formation and transformation of silver atoms generated by γ-irradiation of silver-containing carriers. Three types of dependences have been established: (1) extreme; (2) saturation curves; and (3) step-like. All the kinetic curves display, after a definite period of time, stable concentrations of adsorbed silver atoms per unit surface area at a given temperature. Depending on the temperature of the experiment, the composition and nature of the carrier, the number of adsorbed silver ions, the irradiation dose and the conditions of the experiment, the stable concentration of silver atoms at a given temperature may be equal to, higher than, or lower than the number of silver atoms measured immediately after γ-irradiation at liquid-nitrogen temperature. A kinetic scheme is proposed to explain the obtained curves.
The model suggests that the silver atoms adsorbed on the surface, as well as those formed after γ-irradiation, are bound to the surface with various binding energies, which reflect the heterogeneity of the carrier surface. (author) 16. Au nanowire junction breakup through surface atom diffusion Science.gov (United States) Vigonski, Simon; Jansson, Ville; Vlassov, Sergei; Polyakov, Boris; Baibuz, Ekaterina; Oras, Sven; Aabloo, Alvo; Djurabekova, Flyura; Zadin, Vahur 2018-01-01 Metallic nanowires are known to break into shorter fragments due to the Rayleigh instability mechanism. This process is strongly accelerated at elevated temperatures and can completely hinder the functioning of nanowire-based devices such as transparent conductive and flexible coatings. At the same time, arranged gold nanodots have important applications in electrochemical sensors. In this paper we perform a series of annealing experiments on gold and silver nanowires and nanowire junctions at fixed temperatures of 473, 673, 873 and 973 K (200 °C, 400 °C, 600 °C and 700 °C) over a period of 10 min. We show that nanowires are especially prone to fragmentation around junctions and crossing points, even at comparatively low temperatures. The fragmentation process is highly temperature-dependent, and the junction region breaks up at a lower temperature than a single nanowire. We develop a gold parametrization for kinetic Monte Carlo simulations and demonstrate the surface-diffusion origin of the nanowire junction fragmentation. We show that nanowire fragmentation starts at the junctions with high reliability and propose that aligning nanowires in a regular grid could be used as a technique for fabricating arrays of nanodots. 17.
Single OR molecule and OR atomic circuit logic gates interconnected on a Si(100)H surface International Nuclear Information System (INIS) Ample, F; Joachim, C; Duchemin, I; Hliwa, M 2011-01-01 Electron transport calculations were carried out for three terminal OR logic gates constructed either with a single molecule or with a surface dangling bond circuit interconnected on a Si(100)H surface. The corresponding multi-electrode multi-channel scattering matrix (where the central three terminal junction OR gate is the scattering center) was calculated, taking into account the electronic structure of the supporting Si(100)H surface, the metallic interconnection nano-pads, the surface atomic wires and the molecule. Well interconnected, an optimized OR molecule can only run at a maximum of 10 nA output current intensity for a 0.5 V bias voltage. For the same voltage and with no molecule in the circuit, the output current of an OR surface atomic scale circuit can reach 4 μA. 18. Resonant coherent ionization in grazing ion/atom-surface collisions at high velocities Energy Technology Data Exchange (ETDEWEB) Garcia de Abajo, F J [Dept. de Ciencias de la Computacion e Inteligencia Artificial, Facultad de Informatica, Univ. del Pais Vasco, San Sebastian (Spain); Pitarke, J M [Materia Kondentsatuaren Fisika Saila, Zientzi Fakultatea, Euskal Herriko Univ., Bilbo (Spain) 1994-05-01 The resonant coherent interaction of a fast ion/atom with an oriented crystal surface under grazing incidence conditions is shown to contribute significantly to ionize the probe for high enough velocities and motion along a random direction. The dependence of this process on both the distance to the surface and the velocity of the projectile is studied in detail. We focus on the case of hydrogen moving with a velocity above 2 a.u. 
Comparison with other mechanisms of charge transfer, such as capture from inner shells of the target atoms, permits us to draw some conclusions about the charge state of the outgoing projectiles. (orig.) 19. SASP. Contributions to the 13. Symposium on atomic and surface physics and related topics International Nuclear Information System (INIS) Scheier, P.; Maerk, T. 2002-01-01 The XIII symposium on Atomic and Surface Physics and related Topics (SASP) is devoted to cover the research of interactions between ions, electrons, photons, atoms, molecules and clusters and their interaction with surfaces. This year there was a special session dedicated to proton transfer reaction mass spectrometry covering its applications in different fields and a mini symposium on the radiation action on bio-molecules such as uracil. The contributions included in the proceeding correspond to invited lectures and poster sessions, consisting of short and extended abstracts as well as short articles. (nevyjel) 2. Interaction of slow and highly charged ions with surfaces: formation of hollow atoms Energy Technology Data Exchange (ETDEWEB) Stolterfoht, N; Grether, M; Spieler, A; Niemann, D [Hahn-Meitner Institut, Berlin (Germany). Bereich Festkoerperphysik; Arnau, A 1997-03-01 The method of Auger spectroscopy was used to study the interaction of highly charged ions with Al and C surfaces. The formation of hollow Ne atoms in the first surface layers was evaluated by means of a Density Functional theory including non-linear screening effects. The time-dependent filling of the hollow atom was determined from a cascade model yielding information about the structure of the K-Auger spectra. Variation of total intensities of the L- and K-Auger peaks were interpreted by the cascade model in terms of attenuation effects on the electrons in the solid. (author) 3.
Thermal stability studies on atomically clean and sulphur passivated InGaAs surfaces Energy Technology Data Exchange (ETDEWEB) Chauhan, Lalit; Hughes, Greg [School of Physical Sciences, Dublin City University, Glasnevin, Dublin 9 (Ireland)] 2013-03-15 High resolution synchrotron radiation core level photoemission measurements have been used to study the high-temperature stability of sulphur passivated InGaAs surfaces, and comparisons were made with atomically clean surfaces subjected to the same annealing temperatures. Sulphur passivation of clean InGaAs surfaces, prepared by the thermal removal of an arsenic capping layer, was carried out using an in situ molecular sulphur treatment in ultra-high vacuum. The elemental composition of the surfaces of these materials was measured at a series of annealing temperatures up to 530 °C. Following a 480 °C anneal, the In:Ga ratio was found to have dropped by 33% on the sulphur passivated surface, indicating a significant loss of indium, while no drop in the indium signal was recorded at this temperature on the atomically clean InGaAs surface. No significant change in the As surface concentration was measured at this temperature. These results reflect the reduced thermal stability of the sulphur passivated InGaAs compared to the atomically clean surface, which has implications for device fabrication. (Copyright 2013 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) 4. Microscopic modeling of gas-surface scattering: II. Application to argon atom adsorption on a platinum (111) surface Science.gov (United States) Filinov, A.; Bonitz, M.; Loffhagen, D. 2018-06-01 A new combination of first-principles molecular dynamics (MD) simulations with a rate equation model presented in the preceding paper (paper I) is applied to analyze in detail the scattering of argon atoms from a platinum (111) surface. The combined model is based on a classification of all atom trajectories according to their energies into trapped, quasi-trapped and scattering states.
The number of particles in each of the three classes obeys coupled rate equations. The coefficients in the rate equations are the transition probabilities between these states, which are obtained from MD simulations. While these rates are generally time-dependent, after a characteristic time scale t_E of several tens of picoseconds they become stationary, allowing for a rather simple analysis. Here, we investigate this time scale by analyzing in detail the temporal evolution of the energy distribution functions of the adsorbate atoms. We separately study the energy loss distribution function of the atoms and the distribution function of in-plane and perpendicular energy components. Further, we compute the sticking probability of argon atoms as a function of incident energy, angle and lattice temperature. Our model is important for plasma-surface modeling as it allows accurate simulations to be extended to longer time scales. 5. Long Alkyl Chain Organophosphorus Coupling Agents for in Situ Surface Functionalization by Reactive Milling Directory of Open Access Journals (Sweden) Annika Betke 2014-08-01 Full Text Available Innovative synthetic approaches should be simple and environmentally friendly. Here, we present the surface modification of inorganic submicrometer particles with long alkyl chain organophosphorus coupling agents without the need of a solvent, which makes the technique environmentally friendly. In addition, it is of great benefit to realize two goals in one step: size reduction and, simultaneously, surface functionalization. A top-down approach for the synthesis of metal oxide particles with in situ surface functionalization is used to modify titania with long alkyl chain organophosphorus coupling agents. A high energy planetary ball mill was used to perform reactive milling using titania as inorganic pigment and long alkyl chain organophosphorus coupling agents such as dodecyl and octadecyl phosphonic acid.
The final products were characterized by IR, NMR and X-ray fluorescence spectroscopy, thermal and elemental analysis as well as by X-ray powder diffraction and scanning electron microscopy. The process entailed a tribochemical phase transformation from the starting material anatase to a high-pressure modification of titania and the thermodynamically more stable rutile, depending on the process parameters. Furthermore, the particles show sizes between 100 nm and 300 nm and a degree of surface coverage up to 0.8 mmol phosphonate per gram. 6. Damage at a tungsten surface induced by impacts of self-atoms Energy Technology Data Exchange (ETDEWEB) Wu, Yong [Data Center for High Energy Density Physics, Institute of Applied Physics and Computational Mathematics, P. O. Box 8009, Beijing 100088 (China); Krstic, Predrag, E-mail: [email protected] [Institute for Advanced Computational Science, Stony Brook University, Stony Brook, NY 11794-5250 (United States); Zhou, Fu Yang [College of Material Sciences and Optoelectronic Technology, University of the Chinese Academy of Sciences, P. O. Box 4588, Beijing 100049 (China); Meyer, Fred [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6372 (United States)] 2015-12-15 We study the evolution of surface defects of a 300 K tungsten surface due to the cumulative impact of 0.25–10 keV self-atoms. The simulation is performed by molecular dynamics with bond-order Tersoff-form potentials. At all studied impact energies the computation shows a strong defect-recombination effect, both of the created Frenkel pairs and of the implanted atoms recombining with the vacancies created by sputtering. This leads to a saturation of the cumulative vacancy count, evident at energies below 2 keV as long as the implantation per impact atom exceeds sputtering, and to a saturation of the interstitial count when the production of sputtered particles per impact atom becomes larger than 1 (in the energy range 2–4 keV).
The number of cumulative defects is fitted as a function of impact fluence and energy, enabling analytical extrapolation outside the studied range of parameters. - Highlights: • We calculated the cumulative creation of defects in tungsten by self-atom impact. • At some energies, the defect count saturates with increasing damage dose. • The defects are accumulated in the first few layers of the tungsten surface. • The interstitials are formed predominantly as adatoms. 7. Theory of phonon inelastic atom-surface scattering. I. Quantum mechanical treatment of collision dynamics International Nuclear Information System (INIS) Choi, B.H.; Poe, R.T. 1985-01-01 We present a systematic formulation of atom-surface scattering dynamics which includes the vibrational states of the atoms in the solid (phonons). The properties of the total scattering wave function of the system, a representation of the interaction potential matrix, and the characteristics of the independent physical solutions are all derived from the translational invariance of the full Hamiltonian. The scattering equations in integral form as well as the related Green functions were also obtained. The configurational representations of the Green functions, in particular, are quite different from those of conventional scattering theory, where the collision partners are spatially localized. Various versions of the integral expressions for the scattering, transition, and reactance matrices were also obtained. They are useful for introducing approximation schemes. From the present formulation, some specific theoretical schemes, which are more realistic than those employed so far and at the same time capable of yielding effective ab initio computation, are derived in the following paper. The time reversal invariance and the microscopic reversibility of atom-surface scattering were discussed.
The relations between the incoming and outgoing scattering wave functions, which are satisfied in the atom-surface system and are important in transition matrix methods, were presented. The phonon annihilation and creation, and the adsorption and desorption of the atom, are related through time reversal invariance, and thus the microscopic reversibility can be tested experimentally 8. Atomic-layer-resolved analysis of surface magnetism by diffraction spectroscopy International Nuclear Information System (INIS) Matsui, Fumihiko; Matsushita, Tomohiro; Daimon, Hiroshi 2010-01-01 X-ray absorption near edge structure (XANES) and X-ray magnetic circular dichroism (XMCD) measurements by Auger-electron-yield detection are powerful analysis tools for the electronic and magnetic structures of surfaces, but all the information from atoms within the electron mean-free-path range is summed into the obtained spectrum. In order to investigate the electronic and magnetic structures of each atomic layer in the subsurface region, we have proposed a new method, diffraction spectroscopy, which is the combination of X-ray absorption spectroscopy and Auger electron diffraction (AED). From a series of measured thickness-dependent AED patterns, we deduced a set of atomic-layer-specific AED patterns arithmetically. Based on these AED patterns, we succeeded in disentangling the obtained XANES and XMCD spectra into those from different atomic layers. 9. Raman-atomic force microscopy of the ommatidial surfaces of Dipteran compound eyes Science.gov (United States) Anderson, Mark S.; Gaimari, Stephen D. 2003-01-01 The ommatidial lens surfaces of the compound eyes in several species of flies (Insecta: Diptera) and a related order (Mecoptera) were analyzed using a recently developed Raman-atomic force microscope.
We demonstrate in this work that the atomic force microscope (AFM) is a potentially useful instrument for gathering phylogenetic data and that the newly developed Raman-AFM may extend this application by revealing nanometer-scale surface chemistry. This is the first demonstration of apertureless near-field Raman spectroscopy on an intact biological surface. For Chrysopilus testaceipes Bigot (Rhagionidae), this reveals unique cerebral cortex-like surface ridges with periodic variation in height and surface chemistry. Most other Brachyceran flies, and the "Nematoceran" Sylvicola fenestralis (Scopoli) (Anisopodidae), displayed the same morphology, while other taxa displayed various other characteristics, such as a nodule-like (Tipula (Triplicitipula) sp. (Tipulidae)) or coalescing nodule-like (Tabanus punctifer Osten Sacken (Tabanidae)) morphology, a smooth morphology with distinct pits and grooves (Dilophus orbatus (Say) (Bibionidae)), or an entirely smooth surface (Bittacus chlorostigma MacLachlan (Mecoptera: Bittacidae)). The variation in submicrometer structure and surface chemistry provides a new information source of potential phylogenetic importance, suggesting the Raman-atomic force microscope could provide a new tool useful to systematic and evolutionary inquiry. 10. Evaluation of the roughness of the surface of porcelain systems with the atomic force microscope International Nuclear Information System (INIS) Chavarria Rodriguez, Bernal 2013 The surface of a dental ceramic was evaluated and compared using an atomic force microscope after being treated with different polishing systems. 14 identical ceramic Lava® Zirconia discs were used to test the different polishing systems. 3 polishing systems from different manufacturers were used to polish the dental porcelain.
The samples were evaluated quantitatively with an atomic force microscope in order to study the real effectiveness of each system on the roughness average (Ra) and the maximum peak-to-valley roughness (Ry) of the ceramic surfaces. A considerable reduction of the surface roughness was obtained by applying the different polishing systems on the surface of the dental ceramics. Very reliable values of Ra and Ry were obtained by making measurements on the structure reproduced by the atomic force microscope. The advanced zirconium oxide ceramics presented the best physical characteristics and low levels of surface roughness. A smoother surface was achieved with the application of polishing systems, thus demonstrating the reduction of the surface roughness of a dental ceramic 11. Evolution of atomic-scale surface structures during ion bombardment: A fractal simulation International Nuclear Information System (INIS) Shaheen, M.A.; Ruzic, D.N. 1993-01-01 Surfaces of interest in microelectronics have been shown to exhibit fractal topographies on the atomic scale. A model utilizing self-similar fractals to simulate surface roughness has been added to the ion bombardment code TRIM. The model has successfully predicted experimental sputtering yields of low energy (less than 1000 eV) Ar on Si and D on C using experimentally determined fractal dimensions. Under ion bombardment the fractal surface structures evolve as the atoms in the collision cascade are displaced or sputtered. These atoms have been tracked and the evolution of the surface in steps of one monolayer of flux has been determined. The Ar-Si system has been studied for incidence energies of 100 and 500 eV, and incidence angles of 0°, 30°, and 60°. As expected, normally incident ion bombardment tends to reduce the roughness of the surface, whereas large-angle ion bombardment increases the degree of surface roughness.
Of particular interest though, the surfaces are still locally self-similar fractals after ion bombardment and a steady state fractal dimension is reached, except at large angles of incidence 12. Adsorption of flexible polymer chains on a surface: Effects of different solvent conditions Science.gov (United States) Martins, P. H. L.; Plascak, J. A.; Bachmann, M. 2018-05-01 Polymer chains undergoing a continuous adsorption-desorption transition are studied through extensive computer simulations. A three-dimensional self-avoiding walk lattice model of a polymer chain grafted onto a surface has been treated for different solvent conditions. We have used an advanced contact-density chain-growth algorithm, in which the density of contacts can be directly obtained. From this quantity, the order parameter and its fourth-order Binder cumulant are computed, as well as the corresponding critical exponents and the adsorption-desorption transition temperature. As the number of configurations with a given number of surface contacts and monomer-monomer contacts is independent of the temperature and solvent conditions, it can be easily applied to get results for different solvent parameter values without the need of any extra simulations. In analogy to continuous magnetic phase transitions, finite-size-scaling methods have been employed. Quite good results for the critical properties and phase diagram of very long single polymer chains have been obtained by properly taking into account the effects of corrections to scaling. The study covers all solvent effects, going from the limit of super-self-avoiding walks, characterized by effective monomer-monomer repulsion, to poor solvent conditions that enable the formation of compact polymer structures. 13. 
Track sensitivity and the surface roughness measurements of CR-39 with atomic force microscope CERN Document Server Yasuda, N; Amemiya, K; Takahashi, H; Kyan, A; Ogura, K 1999-01-01 An atomic force microscope (AFM) has been applied to evaluate the surface roughness and the track sensitivity of the CR-39 track detector. We experimentally confirmed the inverse correlation between the track sensitivity and the roughness of the detector surface after etching. The surface of high-sensitivity CR-39 (CR-39 doped with antioxidant (HARZLAS (TD-1)) and copolymer of CR-39/NIPAAm (TNF-1)) becomes rough through etching, while pure CR-39 (BARYOTRAK) with low sensitivity keeps its original surface clarity even after long etching.

14. Improvement and protection of niobium surface superconductivity by atomic layer deposition and heat treatment Energy Technology Data Exchange (ETDEWEB) Proslier, T.; /IIT, Chicago /Argonne; Zasadzinski, J.; /IIT, Chicago; Moore, J.; Pellin, M.; Elam, J.; /Argonne; Cooley, L.; /Fermilab; Antoine, C.; /Saclay 2008-11-01 A method to treat the surface of Nb is described, which potentially can improve the performance of superconducting rf cavities. We present tunneling and x-ray photoemission spectroscopy measurements at the surface of cavity-grade niobium samples coated with a 3 nm alumina overlayer deposited by atomic layer deposition. Coated samples baked in ultrahigh vacuum at low temperature showed a degraded superconducting surface. However, at temperatures above 450 °C, the tunneling conductance curves show significant improvements in the superconducting density of states compared with untreated surfaces.

15. Surface features on Sahara soil dust particles made visible by atomic force microscope (AFM) phase images Directory of Open Access Journals (Sweden) M. O.
Andreae 2008-10-01 Full Text Available We show that atomic force microscopy (AFM) phase images can reveal surface features of soil dust particles which are not evident using other microscopic methods. The non-contact AFM method is able to resolve topographical structures in the nanometer range as well as to uncover repulsive atomic forces and attractive van der Waals forces, and thus gives insight into surface properties. Though the method does not allow quantitative assignment in terms of chemical compound description, it clearly shows deposits of distinguishable material on the surface. We apply this technique to dust aerosol particles from the Sahara collected over the Atlantic Ocean and describe micro-features on the surfaces of such particles.

16. Interatomic potentials from rainbow scattering of keV noble gas atoms under axial surface channeling International Nuclear Information System (INIS) Schueller, A.; Wethekam, S.; Mertens, A.; Maass, K.; Winter, H.; Gaertner, K. 2005-01-01 For grazing scattering of keV Ne and Ar atoms from a Ag(1 1 1) and a Cu(1 1 1) surface under axial surface channeling conditions, we observe well defined peaks in the angular distributions of scattered projectiles. These peaks can be attributed to 'rainbow scattering' and are closely related to the geometry of potential energy surfaces, which can be approximated by the superposition of continuum potentials along strings of atoms in the surface plane. The dependence of rainbow angles on the scattering geometry provides stringent tests of the scattering potentials. From classical trajectory calculations based on universal (ZBL), adjusted Moliere (O'Connor and Biersack), and individual interatomic potentials we obtain corresponding rainbow angles for comparison with the experimental data.
We find good overall agreement with the experiments for a description of trajectories based on adjusted Moliere and individual potentials, whereas the agreement is poorer for potentials with ZBL screening.

17. Theoretical atomic-force-microscopy study of a stepped surface: Nonlocal effects in the probe International Nuclear Information System (INIS) Girard, C. 1991-01-01 The interaction force between a metallic tip and a nonplanar dielectric surface is derived from a nonlocal formalism. A general formulation is given for the case of a spherical tip of nanometer size and for surfaces of arbitrary shapes (stepped surfaces and single crystals adsorbed on a planar surface). The dispersion part of the attractive force is obtained from a nonlocal theory expressed in terms of generalized electric susceptibilities of the two constituents. Implications for atomic force microscopy in attractive modes are discussed. In this context, the present model indicates two different forms of corrugation: atomic-scale corrugations due to the protuberance present on the tip, and nanometer-sized corrugations detected in the attractive region by the spherical part of the tip.

18. Escherichia coli surface display of single-chain antibody VRC01 against HIV-1 infection International Nuclear Information System (INIS) Wang, Lin-Xu; Mellon, Michael; Bowder, Dane; Quinn, Meghan; Shea, Danielle; Wood, Charles; Xiang, Shi-Hua 2015-01-01 Human immunodeficiency virus type 1 (HIV-1) transmission and infection occur mainly via the mucosal surfaces. The commensal bacteria residing on these surfaces can potentially be employed as a vehicle for delivering inhibitors to prevent HIV-1 infection. In this study, we have employed a bacteria-based strategy to display the broadly neutralizing antibody VRC01, which could potentially be used to prevent HIV-1 infection. The VRC01 antibody mimics CD4 binding to gp120 and has broad neutralizing activity against HIV-1.
We have designed a construct that expresses a fusion of the scFv-VRC01 antibody with the autotransporter β-barrel domain of the IgAP gene from Neisseria gonorrhoeae, which enabled surface display of the antibody molecule. Our results indicate that the scFv-VRC01 antibody molecule was displayed on the surface of the bacteria, as demonstrated by flow cytometry and immunofluorescence microscopy. The engineered bacteria can capture HIV-1 particles via surface binding and inhibit HIV-1 infection in cell culture. - Highlights: • Designed single-chain VRC01 antibody was demonstrated to bind HIV-1 envelope gp120. • Single-chain VRC01 antibody was successfully displayed on the surface of E. coli. • Engineered bacteria can absorb HIV-1 particles and prevent HIV-1 infection in cell culture.

19. Atomic interactions at the (100) diamond surface and the impact of surface and interface changes on the electronic transport properties Science.gov (United States) Deferme, Wim For centuries, diamond has been a material that speaks to one's imagination. Until the 18th century it was mined only in India; later it was also found in Brazil and South Africa. Beyond its fascinating properties, diamond is also a very interesting material for industry. After the discovery at the end of the 18th century that diamond consists of carbon, it took until the 1950s before research groups from Russia, Japan and the USA were able to reproduce the growth process of diamond. In 1989 it was discovered that the surface of intrinsic, insulating diamond can be made conductive by hydrogenating the surface. It was clear that not only hydrogen at the surface but also the so-called "adsorbates" were responsible for this conductivity.
The influence of other species (such as oxygen) on the mechanism of surface conductivity was still not completely clear, and therefore this thesis investigates the influence of oxygen on the electronic transport properties of atomically flat diamond. Besides the growth of atomically flat diamond by CVD (chemical vapour deposition) and the study of the grown surfaces with characterising techniques such as AFM (atomic force microscopy) and STM (scanning tunnelling microscopy), the study of surface treatment with plasma techniques is the main topic of this thesis. The influence of oxygen on the surface conductivity is studied, and with the ToF (time-of-flight) technique the transport properties of the freestanding diamond are examined. With a short laser flash, electrons and holes are created at the diamond/aluminium interface, and under an electric field (up to 500 V) the charge carriers are transported to the back contact. In this way the influence of the surface and of changes at the aluminium contacts is studied, leading to very interesting results.

20. Long Chain N-acyl Homoserine Lactone Production by Enterobacter sp. Isolated from Human Tongue Surfaces Science.gov (United States) Yin, Wai-Fong; Purmal, Kathiravan; Chin, Shenyang; Chan, Xin-Yue; Chan, Kok-Gan 2012-01-01 We report the isolation of the N-acyl homoserine lactone-producing Enterobacter sp. isolate T1-1 from the posterior dorsal surface of the tongue of a healthy individual. Spent culture supernatant extracts from Enterobacter sp. isolate T1-1 activated the biosensor Agrobacterium tumefaciens NTL4(pZLR4), suggesting production of long chain AHLs by this isolate. High resolution mass spectrometry analysis of these extracts confirmed that Enterobacter sp. isolate T1-1 produced a long chain N-acyl homoserine lactone, namely N-dodecanoyl-homoserine lactone (C12-HSL).
To the best of our knowledge, this is the first isolation of Enterobacter sp. strain T1-1 from the posterior dorsal surface of the human tongue and the first report of N-acyl homoserine lactone production by this bacterium. PMID:23202161

1. Ab initio electronic structure calculations for Mn linear chains deposited on CuN/Cu(001) surfaces International Nuclear Information System (INIS) Barral, Maria Andrea; Weht, Ruben; Lozano, Gustavo; Maria Llois, Ana 2007-01-01 In a recent experiment, scanning tunneling microscopy has been used to obtain a direct probe of the magnetic interaction in linear manganese chains arranged by atomic manipulation on thin insulating copper nitride islands grown on Cu(001). The local spin excitation spectra of these chains have been measured with inelastic electron tunneling spectroscopy. By analyzing the spectroscopic results with a Heisenberg Hamiltonian, the interatomic coupling strength within the chains has been obtained. It has been found that the coupling strength depends on the deposition sites of the Mn atoms on the islands. In this contribution, we perform ab initio calculations for different arrangements of infinite Mn chains on CuN in order to understand the influence of the environment on the value of the magnetic interactions.

2. Quantum theory of atom-surface scattering: exact solutions and evaluation of approximations International Nuclear Information System (INIS) Chiroli, C.; Levi, A.C. 1976-01-01 In a recent article a hard corrugated surface was proposed as a simple model for atom-surface scattering. The problem was not solved exactly; instead, several alternative approximations were considered. Since three similar but inequivalent approximations were proposed, the problem arose of evaluating them in order to choose between them. In the present letter some exact calculations are presented which make this choice rationally possible. (Auth.)

3.
Reversible electrochemical modification of the surface of a semiconductor by an atomic-force microscope probe Energy Technology Data Exchange (ETDEWEB) Kozhukhov, A. S., E-mail: [email protected]; Sheglov, D. V.; Latyshev, A. V. [Russian Academy of Sciences, Rzhanov Institute of Semiconductor Physics, Siberian Branch (Russian Federation)] 2017-04-15 A technique for reversible surface modification with an atomic-force-microscope (AFM) probe is suggested. In this method, no significant mechanical or topographic changes occur upon a local variation in the surface potential of a sample under the AFM probe. The method allows a controlled relative change in the ohmic resistance of a channel in a Hall bridge within the range 20–25%.

4. Behaviour of oxygen atoms near the surface of nanostructured Nb2O5 International Nuclear Information System (INIS) Cvelbar, U; Mozetic, M 2007-01-01 Recombination of neutral oxygen atoms on oxidized niobium foil was studied. Three sets of samples were prepared: a set of niobium foils with a film of polycrystalline niobium oxide with a thickness of 40 nm, another with a film thickness of about 2 μm, and a set of foils covered with dense bundles of single-crystal Nb2O5 nanowires. All the samples were prepared by oxidation of a pure niobium foil. The samples with a thin oxide film were prepared by exposure of as-received foils to a flux of O atoms, the samples with a thick polycrystalline niobium oxide were prepared by baking the foils in air at a temperature of 800 °C, while the samples covered with nanowires were prepared by oxidation in a highly reactive oxygen plasma. The samples were exposed to neutral oxygen atoms from a remote oxygen plasma source. Depending on discharge parameters, the O-atom density in the postglow chamber, as measured with a catalytic probe, was between 5 × 10^20 and 8 × 10^21 m^-3. The O-atom density in the chamber without the samples was found to be rather independent of the probe position.
The presence of the samples caused a decrease in the O-atom density. Depending on the distance from the samples, the O-atom density decreased by up to a factor of 5. The O-atom density also depended on the surface morphology of the samples. The strongest decrease in the O-atom density was observed with the samples covered with dense bundles of nanowires. The results clearly showed that niobium oxide nanowires exhibit excellent catalytic behaviour for neutral radicals and can be used as catalysts of exhaust radicals found in many applications.

5. Hydrophilization of Poly(ether ether ketone) Films by Surface-initiated Atom Transfer Radical Polymerization DEFF Research Database (Denmark) Fristrup, Charlotte Juel; Eskimergen, Rüya; Burkrinsky, J.T. 2008-01-01 ... and confirmed by ATR FTIR, water contact angle, and Thermal Gravimetric Analysis (TGA). The surface topography was evaluated by Atomic Force Microscopy (AFM). X-ray Photoelectron Spectroscopy (XPS) has been used to investigate the degree of functionalization. The performed modification allowed for successful...

6. On Surface-Initiated Atom Transfer Radical Polymerization Using Diazonium Chemistry To Introduce the Initiator Layer DEFF Research Database (Denmark) Iruthayaraj, Joseph; Chernyy, Sergey; Lillethorup, Mie 2011-01-01 This work features the controllability of surface-initiated atom transfer radical polymerization (SI-ATRP) of methyl methacrylate, initiated by a multilayered 2-bromoisobutyryl moiety formed via diazonium chemistry. The thickness as a function of polymerization time has been studied by varying di...

7. Surface topography characterization using an atomic force microscope mounted on a coordinate measuring machine DEFF Research Database (Denmark) De Chiffre, Leonardo; Hansen, H.N; Kofod, N 1999-01-01 The paper describes the construction, testing and use of an integrated system for topographic characterization of fine surfaces on parts having relatively big dimensions.
An atomic force microscope (AFM) was mounted on a manual three-coordinate measuring machine (CMM) achieving free positioning o...

8. Energy exchange in thermal energy atom-surface scattering: impulsive models International Nuclear Information System (INIS) Barker, J.A.; Auerbach, D.J. 1979-01-01 Energy exchange in thermal energy atom-surface collisions is studied using impulsive ('hard cube' and 'hard sphere') models. Both models reproduce the observed nearly linear relation between outgoing and incoming energies. In addition, the hard-sphere model accounts for the widths of the outgoing energy distributions. (Auth.)

9. PEGylation on mixed monolayer gold nanoparticles: Effect of grafting density, chain length, and surface curvature. Science.gov (United States) Lin, Jiaqi; Zhang, Heng; Morovati, Vahid; Dargazany, Roozbeh 2017-10-15 PEGylation on nanoparticles (NPs) is widely used to prevent aggregation and to mask NPs from the fast clearance system in the body. Understanding the molecular details of the PEG layer could facilitate rational design of PEGylated NPs that maximize their solubility and stealth ability without significantly compromising the targeting efficiency and cellular uptake. Here, we use molecular dynamics (MD) simulation to understand the structural and dynamic properties of the PEG coating of mixed-monolayer gold NPs. Specifically, we modeled gold NPs with PEG grafting densities ranging from 0 to 2.76 chains/nm^2, chain lengths of 0-10 PEG monomers, and NP core diameters from 5 nm to 500 nm. It is found that the area accessed by individual PEG chains gradually transitions from a "mushroom" to a "brush" conformation as the NP surface curvature becomes flatter, whereas such a transition is not evident on small NPs when the grafting density increases. It is shown that a moderate grafting density (∼1.0 chain/nm^2) and short chain length are sufficient to prevent NPs from aggregating in an aqueous medium.
The effect of grafting density on solubility is also validated by dynamic light scattering measurements of PEGylated 5 nm gold NPs. With respect to the shielding ability, simulations predict that increasing either the grafting density, the chain length, or the NP diameter will reduce the accessibility of the protected content to a molecule of a given size. Interestingly, reducing the NP surface curvature is estimated to be most effective in promoting shielding ability. For shielding against small molecules, increasing the PEG grafting density is more effective than increasing the chain length. A simple model that includes these three investigated parameters is developed based on the simulations to roughly estimate the shielding ability of the PEG layer with respect to molecules of different sizes. The findings can help expand our current understanding of the PEG layer and guide rational design of PEGylated gold NPs for a particular ...

10. Adsorption and migration of single metal atoms on the calcite (10.4) surface International Nuclear Information System (INIS) Pinto, H; Haapasilta, V; Lokhandwala, M; Foster, Adam S; Öberg, S 2017-01-01 Transition metal atoms are one of the key ingredients in the formation of functional 2D metal organic coordination networks. Additionally, the co-deposition of metal atoms can play an important role in anchoring the molecular structures to the surface at room temperature. Gaining control of such processes requires understanding the adsorption and diffusion properties of the different transition metals on the target surface. Here, we used density functional theory to investigate the adsorption of 3d (Ti, Cr, Fe, Ni, Cu), 4d (Zr, Nb, Mo, Pd, Ag) and 5d (Hf, W, Ir, Pt, Au) transition metal adatoms on the insulating calcite (10.4) surface. We identified the most stable adsorption sites and calculated binding energies and corresponding ground state structures. We find that the preferential adsorption sites are the Ca–Ca bridge sites.
Apart from Cr, Mo, Cu, Ag and Au, all the studied metals bind strongly to the calcite surface. The calculated migration barriers for the representative Ag and Fe atoms indicate that the metal adatoms are mobile on the calcite surface at room temperature. Bader analysis suggests that there is no significant charge transfer between the metal adatoms and the calcite surface. (paper)

11. Atomic force microscopy characterization of the surface wettability of natural fibres International Nuclear Information System (INIS) Pietak, Alexis; Korte, Sandra; Tan, Emelyn; Downard, Alison; Staiger, Mark P. 2007-01-01 Natural fibres represent a readily available source of ecologically friendly and inexpensive reinforcement in composites with degradable thermoplastics; however, chemical treatments of fibres are required to prepare feasible composites. It is desirable to characterize the surface wettability of fibres after chemical treatment, as the polarity of cellulose-based fibres influences compatibility with a polymer matrix. Assessment of the surface wettability of natural fibres using conventional methods presents a challenge, as the surfaces are morphologically and chemically heterogeneous, rough, and can be strongly wicking. In this work it is shown that under atmospheric conditions the adhesion force between an atomic force microscopy (AFM) tip and the fibre surface can be used to estimate the water contact angle and surface wettability of the fibre. AFM adhesion force measurements are suitable for the more difficult surfaces of natural fibres and in addition allow for correlations between microstructural features and surface wettability characteristics.

12. Mean-field theory of photoinduced formation of surface reliefs in side-chain azobenzene polymers DEFF Research Database (Denmark) Pedersen, Thomas Garm; Johansen, Per Michael; Holme, N.C.R. 1998-01-01 A mean-field model of photoinduced surface reliefs in dye-containing side-chain polymers is presented.
It is demonstrated that photoinduced ordering of dye molecules subject to anisotropic intermolecular interactions leads to mass transport even when the intensity of the incident light is spatially uniform. Theoretical profiles are obtained using a simple variational method and excellent agreement with experimental surface reliefs recorded under various polarization configurations is found. The polarization dependence of both period and shape of the profiles is correctly reproduced by the model.

13. Quantitative measurements of ground state atomic oxygen in atmospheric pressure surface micro-discharge array Science.gov (United States) Li, D.; Kong, M. G.; Britun, N.; Snyders, R.; Leys, C.; Nikiforov, A. 2017-06-01 The generation of atomic oxygen in an array of surface micro-discharges, working in atmospheric pressure He/O2 or Ar/O2 mixtures, is investigated. The absolute atomic oxygen density and its temporal and spatial dynamics are studied by means of two-photon absorption laser-induced fluorescence. A high density of atomic oxygen is detected in the He/O2 mixture with up to 10% O2 content in the feed gas, whereas the atomic oxygen concentration in the Ar/O2 mixture stays below the detection limit of 10^13 cm^-3. The measured O density near the electrode under the optimal conditions in He/1.75% O2 gas is 4.26 × 10^15 cm^-3. The existence of the ground state O(2p^4 3P) species has been proven in the discharge at distances up to 12 mm away from the electrodes. Dissociative reactions of singlet O2 with O3 and deep vacuum ultraviolet radiation, including the radiation of the He2* excimer, are proposed to be responsible for O(2p^4 3P) production in the far afterglow. The capability of the surface micro-discharge array to deliver atomic oxygen over long distances and a large area is considered very interesting for various biomedical applications.

14. Engineering the Eigenstates of Coupled Spin-1/2 Atoms on a Surface.
Science.gov (United States) Yang, Kai; Bae, Yujeong; Paul, William; Natterer, Fabian D; Willke, Philip; Lado, Jose L; Ferrón, Alejandro; Choi, Taeyoung; Fernández-Rossier, Joaquín; Heinrich, Andreas J; Lutz, Christopher P 2017-12-01 Quantum spin networks having engineered geometries and interactions are eagerly pursued for quantum simulation and access to emergent quantum phenomena such as spin liquids. Spin-1/2 centers are particularly desirable, because they readily manifest coherent quantum fluctuations. Here we introduce a controllable spin-1/2 architecture consisting of titanium atoms on a magnesium oxide surface. We tailor the spin interactions by atomic-precision positioning using a scanning tunneling microscope (STM) and subsequently perform electron spin resonance on individual atoms to drive transitions into and out of quantum eigenstates of the coupled-spin system. Interactions between the atoms are mapped over a range of distances extending from highly anisotropic dipole coupling to strong exchange coupling. The local magnetic field of the magnetic STM tip serves to precisely tune the superposition states of a pair of spins. The precise control of the spin-spin interactions and ability to probe the states of the coupled-spin network by addressing individual spins will enable the exploration of quantum many-body systems based on networks of spin-1/2 atoms on surfaces.

15. Revisiting the inelastic electron tunneling spectroscopy of single hydrogen atom adsorbed on the Cu(100) surface International Nuclear Information System (INIS) Jiang, Zhuoling; Wang, Hao; Sanvito, Stefano; Hou, Shimin 2015-01-01 Inelastic electron tunneling spectroscopy (IETS) of a single hydrogen atom on the Cu(100) surface in a scanning tunneling microscopy (STM) configuration has been investigated by employing the non-equilibrium Green's function formalism combined with density functional theory. The electron-vibration interaction is treated at the level of lowest order expansion.
Our calculations show that the single peak observed in the previous STM-IETS experiments is dominated by the perpendicular mode of the adsorbed H atom, while the parallel one makes only a negligible contribution even when the STM tip is laterally displaced from the top position of the H atom. This propensity of the IETS is deeply rooted in the symmetry of the vibrational modes and the characteristics of the conduction channel of the Cu-H-Cu tunneling junction, which is mainly composed of the 4s and 4p_z atomic orbitals of the Cu apex atom and the 1s orbital of the adsorbed H atom. These findings are helpful for deepening our understanding of the propensity rules for IETS and promoting IETS as a more popular spectroscopic tool for molecular devices.

16. Preservation of atomically clean silicon surfaces in air by contact bonding DEFF Research Database (Denmark) Grey, Francois; Ljungberg, Karin 1997-01-01 When two hydrogen-passivated silicon surfaces are placed in contact under cleanroom conditions, a weak bond is formed. Cleaving this bond under ultrahigh vacuum (UHV) conditions, and observing the surfaces with low energy electron diffraction and scanning tunneling microscopy, we find that the ordered atomic structure of the surfaces is protected from oxidation, even after the bonded samples have been in air for weeks. Further, we show that silicon surfaces that have been cleaned and hydrogen-passivated in UHV can be contacted in UHV in a similarly hermetic fashion, protecting the surface reconstruction from oxidation in air. Contact bonding opens the way to novel applications of reconstructed semiconductor surfaces, by preserving their atomic structure intact outside of a UHV chamber. (C) 1997 American Institute of Physics.

17. Surface Phenomena During Plasma-Assisted Atomic Layer Etching of SiO2. Science.gov (United States) Gasvoda, Ryan J; van de Steeg, Alex W; Bhowmick, Ranadeep; Hudson, Eric A; Agarwal, Sumit 2017-09-13 Surface phenomena during atomic layer etching (ALE) of SiO2 were studied during sequential half-cycles of plasma-assisted fluorocarbon (CFx) film deposition and Ar plasma activation of the CFx film using in situ surface infrared spectroscopy and ellipsometry. Infrared spectra of the surface after the CFx deposition half-cycle from a C4F8/Ar plasma show that an atomically thin mixing layer is formed between the deposited CFx layer and the underlying SiO2 film. Etching during the Ar plasma cycle is activated by Ar+ bombardment of the CFx layer, which results in the simultaneous removal of surface CFx and the underlying SiO2 film. The interfacial mixing layer in ALE is atomically thin due to the low ion energy during CFx deposition, which combined with an ultrathin CFx layer ensures an etch rate of a few monolayers per cycle. In situ ellipsometry shows that for a ∼4 Å thick CFx film, ∼3-4 Å of SiO2 was etched per cycle. However, during the Ar plasma half-cycle, etching proceeds beyond complete removal of the surface CFx layer as F-containing radicals are slowly released into the plasma from the reactor walls. Buildup of CFx on reactor walls leads to a gradual increase in the etch per cycle.

18. Magnetic Interaction between Surface-Engineered Rare-Earth Atomic Spins Directory of Open Access Journals (Sweden) Chiung-Yuan Lin 2012-06-01 Full Text Available We report an ab-initio study of rare-earth adatoms (Gd) on an insulating surface. This surface is of interest because of previous studies by scanning tunneling microscopy showing spin excitations of transition-metal adatoms.
The present work is the first study of rare-earth spin-coupled adatoms, as well as of the effect of geometry on the spin coupling and the underlying mechanism of ferromagnetic coupling. The exchange coupling between Gd atoms on the surface is calculated to be antiferromagnetic in a linear geometry and ferromagnetic in a diagonal geometry. We also find that the Gd dimers in these two geometries are similar to the nearest-neighbor and the next-nearest-neighbor Gd atoms in GdN bulk. We analyze how much direct exchange, superexchange, and Ruderman-Kittel-Kasuya-Yosida interactions contribute to the exchange coupling for both geometries by additional first-principles calculations of related model systems.

19. Formation of nanostructures on HOPG surface in presence of surfactant atom during low energy ion irradiation Energy Technology Data Exchange (ETDEWEB) Ranjan, M., E-mail: [email protected]; Joshi, P.; Mukherjee, S. 2016-07-15 Low energy ion beams often develop periodic patterns on surfaces under normal or off-normal incidence. The formation of such periodic patterns depends on the substrate material, the ion beam parameters, and the processing conditions. Processing conditions introduce unwanted contaminant atoms, which also play a strong role in pattern formation by changing the effective sputtering yield of the material. In this work we have analysed the effect of Cu, Fe and Al impurities introduced during low energy Ar+ ion irradiation on an HOPG substrate. It is observed that by changing the species of foreign atoms the surface topography changes drastically. The observed surface topography is correlated with the modified sputtering yield of HOPG. The presence of Cu and Fe amplifies the effective sputtering yield of HOPG, so that the required threshold for pattern formation is achieved with the given fluence, whereas Al does not lead to any significant change in the effective yield and hence no pattern formation occurs.

20.
Apparatus and method for atmospheric pressure reactive atom plasma processing for shaping of damage free surfaces Science.gov (United States) Carr, Jeffrey W [Livermore, CA] 2009-03-31 Fabrication apparatus and methods are disclosed for shaping and finishing difficult materials with no subsurface damage. The apparatus and methods use an atmospheric pressure mixed gas plasma discharge as a sub-aperture polisher of, for example, fused silica and single crystal silicon, silicon carbide and other materials. In one example, workpiece material is removed at the atomic level through reaction with fluorine atoms. In this example, these reactive species are produced by a noble gas plasma from trace constituent fluorocarbons or other fluorine containing gases added to the host argon matrix. The products of the reaction are gas phase compounds that flow from the surface of the workpiece, exposing fresh material to the etchant without condensation and redeposition on the newly created surface. The discharge provides a stable and predictable distribution of reactive species, permitting the generation of a predetermined surface by translating the plasma across the workpiece along a calculated path.

1. Exploring a potential energy surface by machine learning for characterizing atomic transport Science.gov (United States) Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro 2018-03-01 We propose a machine-learning method for evaluating the potential barrier governing atomic transport, based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points.
The robustness and efficiency of the method are demonstrated on a dozen model cases for proton diffusion in oxides, in comparison with a conventional nudged elastic band method. 2. Investigation of graphite composite anode surfaces by atomic force microscopy and related techniques Energy Technology Data Exchange (ETDEWEB) Hirasawa, Karen Akemi; Nishioka, Keiko; Sato, Tomohiro; Yamaguchi, Shoji; Mori, Shoichiro [Mitsubishi Chemical Corp., Tsukuba Research Center, Ibaraki (Japan)] 1997-11-01 The surface of a synthetic graphite (KS-44) and polyvinylidene difluoride binder (PVDF) anode for lithium-ion secondary batteries is imaged using atomic force microscopy (AFM) and several related scanning probe microscope (SPM) instruments including: dynamic force microscopy (DFM), friction force microscopy (FFM), laterally-modulated friction force microscopy (LM-FFM), visco-elasticity atomic force microscopy (VE-AFM), and AFM/simultaneous current measurement mode (SCM). DFM is found to be an exceptional mode for topographic imaging while FFM results in the clearest contrast distinction between PVDF binder and KS-44 graphite regions. (orig.) 3. Reversal of atomic contrast in scanning probe microscopy on (111) metal surfaces Czech Academy of Sciences Publication Activity Database Ondráček, Martin; González, C.; Jelínek, Pavel 2012-01-01 Roč. 24, 08 (2012), 084003/1-084003/7 ISSN 0953-8984 R&D Projects: GA ČR(CZ) GPP204/11/P578; GA ČR GAP204/10/0952; GA ČR GA202/09/0545; GA MŠk(CZ) ME10076 Grant - others:AVČR(CZ) M100100904 Institutional research plan: CEZ:AV0Z10100521 Keywords : atomic force microscopy * metallic surfaces * atomic contrast * scanning tunneling microscopy Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 2.355, year: 2012 http://iopscience.iop.org/0953-8984/24/8/084003 4. Surface PEGylation of mesoporous silica materials via surface-initiated chain transfer free radical polymerization: Characterization and controlled drug release. 
Science.gov (United States) Huang, Long; Liu, Meiying; Mao, Liucheng; Huang, Qiang; Huang, Hongye; Wan, Qing; Tian, Jianwen; Wen, Yuanqing; Zhang, Xiaoyong; Wei, Yen 2017-12-01 5. Experimental studies of ion and atom interaction with an insulating surface; Etude experimentale de l'interaction rasante d'atomes et d'ions sur des surfaces isolantes Energy Technology Data Exchange (ETDEWEB) Villette, J 2000-10-15 Grazing collisions (<3 deg.) of keV ions and atoms: H{sup +}, Ne{sup +}, Ne{sup 0}, Na{sup +} on a LiF(001) single crystal, an ionic insulator, are investigated by a time-of-flight technique. The incident beam is chopped and the scattered particles are collected on a position-sensitive detector providing the differential cross section, while the time of flight gives the energy loss. Deflection plates allow the charge-state analysis. Secondary electrons are detected in coincidence, allowing direct measurement of the electron emission yield and of the angular and energy distributions through time-of-flight measurements. The target electronic structure, characterized by a large band gap, governs the collisional processes: charge exchange, electronic excitations and electron emission. In particular, these studies show that the population of local target excitations (surface excitons) is the major contribution to the kinetic energy transfer (stopping power). Auger neutralization of Ne{sup +} and He{sup +} ions reveals the population of quasi-molecular excitons, an exciton bound to two holes, referred to in the literature as a trion. A direct energy balance determines the binding energy associated with these excited states of the surface. Besides these electronic energy loss processes, two nuclear energy loss mechanisms are characterized. These processes imply momentum transfer to individual target atoms during close binary collisions or, if the projectile is charged, to collective modes of optical phonons induced by the projectile Coulomb field. 
The effect of the temperature on the scattering profile, and the contributions of topological surface defects to the energy loss profile and to skipping motion on the surface, are analyzed in view of classical trajectory simulations. (author) 6. Atomic-Scale Visualization of Quasiparticle Interference on a Type-II Weyl Semimetal Surface. Science.gov (United States) Zheng, Hao; Bian, Guang; Chang, Guoqing; Lu, Hong; Xu, Su-Yang; Wang, Guangqiang; Chang, Tay-Rong; Zhang, Songtian; Belopolski, Ilya; Alidoust, Nasser; Sanchez, Daniel S; Song, Fengqi; Jeng, Horng-Tay; Yao, Nan; Bansil, Arun; Jia, Shuang; Lin, Hsin; Hasan, M Zahid 2016-12-23 We combine quasiparticle interference simulation (theory) and atomic-resolution scanning tunneling spectromicroscopy (experiment) to visualize the interference patterns on a type-II Weyl semimetal Mo_{x}W_{1-x}Te_{2} for the first time. Our simulation based on first-principles band topology theoretically reveals the surface electron scattering behavior. We identify the topological Fermi arc states and reveal the scattering properties of the surface states in Mo_{0.66}W_{0.34}Te_{2}. In addition, our result reveals an experimental signature of the topology via the interconnectivity of bulk and surface states, which is essential for understanding the unusual nature of this material. 7. Grazing scattering of fast atoms on surfaces of metal-oxide crystals and ultrathin films; Streifende Streuung schneller Atome an Oberflaechen von Metalloxid-Kristallen und ultraduennen Filmen Energy Technology Data Exchange (ETDEWEB) Blauth, David 2010-03-11 In the framework of the present dissertation the interactions of fast atoms with surfaces of bulk oxides, metals and thin films on metals were studied. The experiments were performed in the regime of grazing incidence of atoms with energies of some keV. 
The advantage of this scattering geometry is the high surface sensitivity and thus the possibility to determine the crystallographic and electronic characteristics of the topmost surface layer. In addition to these experiments, the energy loss and the electron emission induced by scattered projectiles were investigated. The energies for electron emission and exciton excitation on alumina/NiAl(110) and SiO{sub 2}/Mo(112) are determined. By detecting the number of projectile-induced emitted electrons as a function of the azimuthal angle of rotation of the target surface, the geometric structure of the atoms forming the topmost layer of different adsorbate films on metal surfaces was determined via ion-beam triangulation. (orig.) 8. Electron mobility on the surface of liquid Helium: influence of surface level atoms and depopulation of lowest subbands International Nuclear Information System (INIS) Grigoriev, P. D.; Dyugaev, A. M.; Lebedeva, E. V. 2008-01-01 The temperature dependence of electron mobility is examined. We calculate the contribution to the electron scattering rate from the surface level atoms (SLAs), proposed in [10]. This contribution is substantial at low temperatures T < 0.5 K, when the He vapor concentration is exponentially small. We also study the effect of depopulation of the lowest energy subband, which leads to an increase in the electron mobility at high temperature. The results explain certain long-standing discrepancies between the existing theory and experiment on electron mobility on the surface of liquid helium 9. Mapping Hydrophobicity on the Protein Molecular Surface at Atom-Level Resolution Science.gov (United States) Nicolau Jr., Dan V.; Paszek, Ewa; Fulga, Florin; Nicolau, Dan V. 2014-01-01 A precise representation of the spatial distribution of hydrophobicity, hydrophilicity and charges on the molecular surface of proteins is critical for the understanding of the interaction with small molecules and larger systems. 
The representation of hydrophobicity is rarely done at atom-level, as this property is generally assigned to residues. A new methodology for the derivation of atomic hydrophobicity from any amino acid-based hydrophobicity scale was used to derive 8 sets of atomic hydrophobicities, one of which was used to generate the molecular surfaces for 35 proteins with convex structures, 5 of which, i.e., lysozyme, ribonuclease, hemoglobin, albumin and IgG, have been analyzed in more detail. Sets of the molecular surfaces of the model proteins have been constructed using spherical probes with increasingly large radii, from 1.4 to 20 Å, followed by the quantification of (i) the surface hydrophobicity; (ii) their respective molecular surface areas, i.e., total, hydrophilic and hydrophobic area; and (iii) their relative densities, i.e., divided by the total molecular area, or specific densities, i.e., divided by the property-specific area. Compared with the amino acid-based formalism, the atom-level description reveals molecular surfaces which (i) present approximately two times more hydrophilic area, with (ii) less extended but 2 to 5 times more intense hydrophilic patches, and (iii) 3 to 20 times more extended hydrophobic areas. The hydrophobic areas are also approximately 2 times more intense in hydrophobicity. This more pronounced “leopard skin”-like design of the protein molecular surface has been confirmed by comparing the results for a restricted set of homologous proteins, i.e., hemoglobins diverging by only one residue (Trp37). These results suggest that the representation of hydrophobicity on protein molecular surfaces at atom-level resolution, coupled with the probing of the molecular surface at different geometric resolutions 10. 
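The area bookkeeping performed in the preceding record (total vs. hydrophilic vs. hydrophobic molecular surface area, plus the relative densities obtained by dividing by the total area) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the `SurfacePatch` record, the sample numbers, and the sign convention (hydrophobicity above a threshold counted as hydrophobic) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SurfacePatch:
    area: float            # patch area on the probe-generated molecular surface, in A^2
    hydrophobicity: float  # atomic hydrophobicity assigned to the underlying atom

def area_breakdown(patches, threshold=0.0):
    """Split molecular surface area into hydrophobic and hydrophilic parts.

    A patch counts as hydrophobic when its atomic hydrophobicity exceeds
    `threshold`; the fractions are the "relative densities" of the record,
    i.e. property-specific area divided by total area.
    """
    total = sum(p.area for p in patches)
    hydrophobic = sum(p.area for p in patches if p.hydrophobicity > threshold)
    hydrophilic = total - hydrophobic
    return {
        "total": total,
        "hydrophobic": hydrophobic,
        "hydrophilic": hydrophilic,
        "hydrophobic_fraction": hydrophobic / total,
        "hydrophilic_fraction": hydrophilic / total,
    }

# Three hypothetical patches; repeating this for molecular surfaces built with
# probe radii from 1.4 to 20 A would reproduce the resolution scan described above.
patches = [SurfacePatch(12.0, 0.5), SurfacePatch(8.0, -0.3), SurfacePatch(5.0, 0.1)]
print(area_breakdown(patches))
```

The same breakdown applied to surfaces generated at several probe radii is what makes the "leopard skin" patchiness quantifiable as a function of geometric resolution.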
Molecular dynamics study of the interactions of incident N or Ti atoms with the TiN(001) surface International Nuclear Information System (INIS) Xu, Zhenhai; Zeng, Quanren; Yuan, Lin; Qin, Yi; Chen, Mingjun; Shan, Debin 2016-01-01 Graphical abstract: - Highlights: • Interactions of incident N or Ti atoms with the TiN(001) surface are studied by CMD. • The impact position of incident N on the surface determines the interaction modes. • Adsorption could occur due to the atomic exchange process. • Resputtering and reflection may occur simultaneously. • The initial sticking coefficient of N on TiN(001) is much smaller than that of Ti. - Abstract: The interaction processes between incident N or Ti atoms and the TiN(001) surface are simulated by classical molecular dynamics based on second nearest-neighbor modified embedded-atom method potentials. The simulations are carried out for substrate temperatures between 300 and 700 K and kinetic energies of the incident atoms within the range of 0.5–10 eV. When N atoms impact the surface, adsorption, resputtering and reflection of particles are observed; several unique atomic mechanisms are identified to account for these interactions, in which adsorption can occur due to the atomic exchange process while resputtering and reflection may occur simultaneously. The impact position of incident N atoms on the surface plays an important role in determining the interaction modes. Their occurrence probabilities are dependent on the kinetic energy of the incident N atoms but independent of the substrate temperature. When Ti atoms are the incident particles, adsorption is the predominant interaction mode between particles and the surface. This results in the much smaller initial sticking coefficient of N atoms on the TiN(001) surface compared with that of Ti atoms. Stoichiometric TiN is promoted by N/Ti flux ratios larger than one. 11. 
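The headline quantity of the record above, the initial sticking coefficient, is simply the adsorbed fraction of impact trials once each trajectory has been classified as adsorption, reflection, or resputtering. A minimal tally over hypothetical MD outcomes (invented counts, not the paper's data) could look like:

```python
from collections import Counter

def sticking_coefficient(outcomes):
    """Initial sticking coefficient: fraction of incident atoms that adsorb.

    `outcomes` holds one label per MD impact trial; adsorption, reflection
    and resputtering may all occur, but only adsorption counts as sticking.
    """
    counts = Counter(outcomes)
    return counts["adsorbed"] / sum(counts.values())

# Hypothetical tally of 100 N-atom impacts on TiN(001):
trials = ["adsorbed"] * 70 + ["reflected"] * 20 + ["resputtered"] * 10
print(sticking_coefficient(trials))  # 0.7
```

Repeating this tally over bins of incident kinetic energy (but not substrate temperature, which the record finds irrelevant) would reproduce the dependence the study reports.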
Engineering of poly(ethylene glycol) chain-tethered surfaces to obtain high-performance bionanoparticles International Nuclear Information System (INIS) Nagasaki, Yukio 2010-01-01 A poly(ethylene glycol)-b-poly[2-(N,N-dimethylamino)ethyl methacrylate] block copolymer possessing a reactive acetal group at the end of the poly(ethylene glycol) (PEG) chain, that is, acetal-PEG-b-PAMA, was synthesized by a proprietary polymerization technique. Gold nanoparticles (GNPs) were prepared using the thus-synthesized acetal-PEG-b-PAMA block copolymer. The PEG-b-PAMA not only acted as a reducing agent of aurate ions but also attached to the nanoparticle surface. The GNPs obtained had controlled sizes and narrow size distributions. They also showed high dispersion stability owing to the presence of PEG tethering chains on the surface. The same strategy should also be applicable to the fabrication of semiconductor quantum dots and inorganic porous nanoparticles. The preparation of nanoparticles in situ, i.e. in the presence of acetal-PEG-b-PAMA, gave the most densely packed polymer layer on the nanoparticle surface; this was not observed when coating preformed nanoparticles. PEG/polyamine block copolymer was more functional on the metal surface than PEG/polyamine graft copolymer, as confirmed by angle-dependent X-ray photoelectron spectroscopy. We successfully solubilized the C60 fullerene into aqueous media using acetal-PEG-b-PAMA. A C60/acetal-PEG-b-PAMA complex with a size below 5 nm was obtained by dialysis. The preparation and characterization of these materials are described in this review. (topical review) 12. 
Nature of the concentration thresholds of europium atom yield from the oxidized tungsten surface under electron stimulated desorption CERN Document Server Davydov, S Y 2002-01-01 The nature of the electron-stimulated desorption (ESD) of europium atoms at irradiating-electron energies E_e of 50 and 80 eV, as well as the peculiarities of the dependence of the Eu atom yield on their concentration on the oxidized tungsten surface, are discussed. It is shown that the ESD originates from an electron transition from the inner 5p or 5s shell of a tungsten surface atom onto the external unfilled 2p level of oxygen 13. Phonon dispersion on Ag (100) surface: A modified analytic embedded atom method study International Nuclear Information System (INIS) Zhang Xiao-Jun; Chen Chang-Le 2016-01-01 Within the harmonic approximation, the analytic expression of the dynamical matrix is derived based on the modified analytic embedded atom method (MAEAM) and the dynamics theory of the surface lattice. The surface phonon dispersions along the three major symmetry directions (Γ̄X̄, Γ̄M̄ and X̄M̄) are calculated for the clean Ag (100) surface by using our derived formulas. We then discuss the polarization and localization of surface modes at points X-bar and M-bar by plotting the squared polarization vectors as a function of the layer index. The phonon frequencies of the surface modes calculated by MAEAM are compared with the available experimental and other theoretical data. It is found that the present results are generally in agreement with the referenced experimental or theoretical results, with a maximum deviation of 10.4%. The agreement shows that the modified analytic embedded atom method is a reasonable many-body potential model for quickly describing the surface lattice vibration. It also lays a significant foundation for studying the surface lattice vibration in other metals. (paper) 14. 
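The pipeline of the phonon record above, building a dynamical matrix within the harmonic approximation and diagonalizing it to obtain phonon frequencies, can be made concrete with the textbook one-dimensional monatomic chain, where the dynamical matrix reduces to a scalar. This is a generic toy model, not the MAEAM potential or the Ag(100) slab geometry of the paper.

```python
import math

def phonon_freq_1d(k, K=1.0, m=1.0, a=1.0):
    """Harmonic 1D monatomic chain: omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|.

    This follows from "diagonalizing" the scalar dynamical matrix
    D(k) = (2K/m) * (1 - cos(k*a)) and taking the square root of its
    (single) eigenvalue, exactly as a slab calculation does for the
    full 3N x 3N matrix.
    """
    D = (2.0 * K / m) * (1.0 - math.cos(k * a))
    return math.sqrt(D)

# Zone-boundary frequency at k = pi/a equals 2*sqrt(K/m):
print(phonon_freq_1d(math.pi))
```

For a real surface calculation, D(k) becomes a matrix indexed by atom and Cartesian direction, and the square roots of its eigenvalues at each k along the surface Brillouin-zone directions give dispersion curves like those reported for Ag(100).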
DNA adsorption and desorption on mica surface studied by atomic force microscopy International Nuclear Information System (INIS) Sun Lanlan; Zhao Dongxu; Zhang Yue; Xu Fugang; Li Zhuang 2011-01-01 The adsorption of DNA molecules on a mica surface and the subsequent desorption of DNA molecules at the ethanol-mica interface were studied using atomic force microscopy. By changing the DNA concentration, different morphologies on the mica surface have been observed. A very uniform and orderly monolayer of DNA molecules was constructed on the mica surface at a DNA concentration of 30 ng/μL. When the samples were immersed in ethanol for about 15 min, various degrees of DNA desorption from mica (0-99%) were achieved. It was found that with increasing DNA concentration, the degree of DNA desorption from the mica at the ethanol-mica interface decreased. When uniform and orderly DNA monolayers were formed on the mica surface, almost no DNA molecules desorbed from the mica surface in this process. The results indicated that the uniform and orderly DNA monolayer is one of the most stable DNA structures formed on the mica surface. In addition, we have studied the structural change of DNA molecules after desorption from the mica surface with atomic force microscopy, and found that the desorption might be ascribed to ethanol-induced DNA condensation. 15. DNA adsorption and desorption on mica surface studied by atomic force microscopy Energy Technology Data Exchange (ETDEWEB) 2011-05-15 16. Atomic structure and composition of the yttria-stabilized zirconia (111) surface. Science.gov (United States) Vonk, Vedran; Khorshidi, Navid; Stierle, Andreas; Dosch, Helmut 2013-06-01 Anomalous and nonanomalous surface X-ray diffraction is used to investigate the atomic structure and composition of the yttria-stabilized zirconia (YSZ) (111) surface. By simulation it is shown that the method is sensitive to Y surface segregation, but that the data must contain high enough Fourier components in order to distinguish between different models describing Y/Zr disorder. Data were collected at room temperature after two different annealing procedures: first under oxidative conditions at 10^-5 mbar O2 and 700 K applied to the as-received samples, where we find that about 30% of the surface is covered by oxide islands, which are depleted in Y as compared with the bulk. After annealing in ultrahigh vacuum at 1270 K the island morphology of the surface remains unchanged, but the islands and the first near-surface layer get significantly enriched in Y. Furthermore, the observation of Zr and oxygen vacancies implies the formation of a porous surface region. 
Our findings have important implications for the use of YSZ as a solid oxide fuel cell electrode material, where yttrium atoms and zirconium vacancies can act as reactive centers, as well as for the use of YSZ as a substrate material for thin film and nanoparticle growth, where defects control the nucleation process. 17. Magnetic Dichroism of Potassium Atoms on the Surface of Helium Nanodroplets International Nuclear Information System (INIS) Nagl, Johann; Auboeck, Gerald; Callegari, Carlo; Ernst, Wolfgang E. 2007-01-01 The population ratio of Zeeman sublevels of atoms on the surface of superfluid helium droplets (T=0.37 K) has been measured. Laser-induced fluorescence spectra of K atoms are measured in the presence of a moderately strong magnetic field (2.9 kG). The relative difference between the two states of circular polarization of the exciting laser is used to determine the electron spin polarization of the ensemble. Equal fluorescence levels indicate that the two spin sublevels of the ground-state K atom are equipopulated, within 1%. Thermalization to 0.37 K would give a population ratio of 0.35. We deduce that the rate of spin relaxation induced by the droplet must be very low; for the K_2 triplet dimer we find instead full thermalization of the spin. 18. Influence of elastic-like relaxation on the size distribution of monatomic Ag chains on the steps of a vicinal Pt surface International Nuclear Information System (INIS) Tokar, V.I.; Dreysse, H. 2007-01-01 We discuss the statistics of the chains of Ag atoms self-assembled on the steps of a vicinal Pt surface, as established experimentally and calculated within a lattice gas model by Gambardella et al. [Phys. Rev. B 73 (2006) 245425]. We suggest that the discrepancy between the theory and experiment may be due to additional interatomic interactions inside the clusters unaccounted for in the model. Our consideration is based on an exactly solvable one-dimensional equilibrium model of self-assembly proposed by us recently. 
We argue that the model provides an adequate approximate description of the Ag/Pt system and show that the chain length distribution in the model can be fitted to the experimental data with high accuracy. 19. Self-interacting polymer chains terminally anchored to adsorbing surfaces of three-dimensional fractal lattices Science.gov (United States) Živić, I.; Elezović-Hadžić, S.; Milošević, S. 2018-01-01 We have studied the adsorption problem of self-attracting linear polymers, modeled by self-avoiding walks (SAWs), situated on three-dimensional fractal structures, exemplified by the 3d Sierpinski gasket (SG) family of fractals as containers of a poor solvent. Members of the SG family are enumerated by an integer b (b ≥ 2), and it is assumed that one side of each SG fractal is an impenetrable adsorbing surface. We calculate the critical exponents γ1, γ11, and γs, which are related to the numbers of all possible SAWs with one, both, and no ends anchored to the adsorbing boundary, respectively. By applying the exact renormalization group (RG) method (for the first three members of the SG fractal family, b = 2, 3, and 4), we have obtained specific values of these exponents for the θ-chain and globular polymer phases. We discuss their mutual relations and their relations with the corresponding values pertinent to the extended polymer chain phase. 20. Density Functional Theory and Atomic Force Microscopy Study of Oleate Functioned on Siderite Surface Directory of Open Access Journals (Sweden) Lixia Li 2018-01-01 Full Text Available Efficiently discovering the interaction of the collector oleate and siderite is of great significance for understanding the inherent function of siderite in weakening hematite reverse flotation. For this purpose, investigation of the adsorption behavior of oleate on the siderite surface was performed by density functional theory (DFT) calculations in association with atomic force microscopy (AFM) imaging. 
The siderite crystal geometry was computationally optimized via convergence tests. Calculated results for the interaction energy and the Mulliken population verified that the collector oleate adsorbed on the siderite surface and that a covalent bond was established as a result of electron transfer from O1 atoms (in the oleate molecule) to Fe1 atoms (in the siderite lattice). Accordingly, the valence-electron configurations of Fe1 and O1 changed from 3d^6.21 4s^0.31 and 2s^1.8 2p^4.88 to 3d^6.51 4s^0.37 and 2s^1.83 2p^4.73, respectively. Siderite surfaces with and without oleate functionalization were examined with the aid of AFM imaging in PeakForce Tapping mode, and the functionalized siderite surface was found to be covered by vesicular membrane-like matter with an average roughness of 16.4 nm, confirming the oleate adsorption. These results contribute to comprehending the interaction of oleate and siderite. 1. Hydrogen atom addition to the surface of graphene nanoflakes: A density functional theory study Science.gov (United States) Tachikawa, Hiroto 2017-02-01 Polycyclic aromatic hydrocarbons (PAHs) provide a 2-dimensional (2D) reaction surface in 3-dimensional (3D) interstellar space and have been utilized as a model of graphene surfaces. In the present study, the reaction of PAHs with atomic hydrogen was investigated by means of density functional theory (DFT) to systematically elucidate the binding nature of atomic hydrogen to graphene nanoflakes. PAHs with n = 4-37 were chosen, where n indicates the number of benzene rings. Activation energies of hydrogen addition to the graphene surface were calculated to be 5.2-7.0 kcal/mol at the CAM-B3LYP/6-311G(d,p) level, which is almost constant for all PAHs. The binding energies of the hydrogen atom were slightly dependent on the size (n): 14.8-28.5 kcal/mol. The absorption spectra showed that a long tail is generated in the low-energy region after hydrogen addition to the graphene surface. 
The electronic states of hydrogenated graphenes were discussed on the basis of the theoretical results. 2. Atomic structure of the SbCu surface alloy: A surface X-ray diffraction study DEFF Research Database (Denmark) Meunier, I.; Gay, J.M.; Lapena, L. 1999-01-01 The dissolution at 400 degrees C of an antimony layer deposited at room temperature on a Cu(111) substrate leads to a surface alloy with a p(√3×√3)R30° superstructure and an Sb composition of 1/3. We present here a structural study of this Sb-Cu compound by surface X... 3. The surface reactivity of acrylonitrile with oxygen atoms on an analogue of interstellar dust grains Science.gov (United States) Kimber, Helen J.; Toscano, Jutta; Price, Stephen D. 2018-06-01 Experiments designed to reveal the low-temperature reactivity on the surfaces of interstellar dust grains are used to probe the heterogeneous reaction between oxygen atoms and acrylonitrile (C2H3CN, H2C=CH-CN). The reaction is studied at a series of fixed surface temperatures between 14 and 100 K. After dosing the reactants onto the surface, temperature-programmed desorption, coupled with time-of-flight mass spectrometry, reveals the formation of a product with the molecular formula C3H3NO. This product results from the addition of a single oxygen atom to the acrylonitrile reactant. The oxygen atom attack appears to occur exclusively at the C=C double bond, rather than involving the cyano (-CN) group. The absence of reactivity at the cyano site hints that full saturation of organic molecules on dust grains may not always occur in the interstellar medium. Modelling the experimental data provides a reaction probability of 0.007 ± 0.003 for a Langmuir-Hinshelwood-style (diffusive) reaction mechanism. 
Desorption energies for acrylonitrile, oxygen atoms, and molecular oxygen from the multilayer mixed ice that their deposition forms are also extracted from the kinetic model and are 22.7 ± 1.0 kJ mol-1 (2730 ± 120 K), 14.2 ± 1.0 kJ mol-1 (1710 ± 120 K), and 8.5 ± 0.8 kJ mol-1 (1020 ± 100 K), respectively. The kinetic parameters we extract from our experiments indicate that the reaction between atomic oxygen and acrylonitrile could occur on interstellar dust grains on an astrophysical time-scale. 4. Surface recombination of oxygen atoms in O2 plasma at increased pressure: II. Vibrational temperature and surface production of ozone Science.gov (United States) Lopaev, D. V.; Malykhin, E. M.; Zyryanov, S. M. 2011-01-01 Ozone production in an oxygen glow discharge in a quartz tube was studied in the pressure range of 10-50 Torr. The O3 density distribution along the tube diameter was measured by UV absorption spectroscopy, and the ozone vibrational temperature T_V was found by comparing the calculated ab initio absorption spectra with the experimental ones. It has been shown that the O3 production mainly occurs on the tube surface, whereas ozone is lost in the tube centre where, in contrast, the electron and oxygen atom densities are maximal. Two models were used to analyse the obtained results. The first one is a kinetic 1D model for the processes occurring near the tube walls with the participation of the main particles: O(3P), O2, O2(1Δg) and O3 molecules in different vibrational states. The agreement of the O3 and O(3P) density profiles and T_V calculated in the model with the observed ones was reached by varying the single model parameter, the ozone production probability γ_O3 on the quartz tube surface, on the assumption that O3 production occurs mainly in the surface recombination of physisorbed O(3P) and O2. 
The phenomenological model of the surface processes with the participation of oxygen atoms and molecules, including singlet oxygen molecules, was also considered to analyse the γ_O3 data obtained in the kinetic model. A good agreement between the experimental data and the data of both models (the kinetic 1D model and the phenomenological surface model) was obtained in the full range of the studied conditions, which allowed consideration of the ozone surface production mechanism in more detail. The important role of singlet oxygen in ozone surface production was shown. The O3 surface production rate directly depends on the density of physisorbed oxygen atoms and molecules and can be high with increasing pressure and energy inputted into the plasma while simultaneously keeping the surface temperature low enough. Using the special discharge cell design, such an approach opens up the 5. Surface recombination of oxygen atoms in O2 plasma at increased pressure: II. Vibrational temperature and surface production of ozone International Nuclear Information System (INIS) Lopaev, D V; Malykhin, E M; Zyryanov, S M 2011-01-01 6. Drop impacts onto cold and heated rigid surfaces: Morphological comparisons, disintegration limits and secondary atomization International Nuclear Information System (INIS) Moita, A.S.; Moreira, A.L.N. 2007-01-01 This paper addresses an experimental study aimed at characterizing the mechanisms of disintegration which occur when individual water and fuel droplets impact onto heated surfaces. The experiments consider the use of a simplified flow configuration and make use of high-speed visualization together with image processing techniques to characterize the morphology of the impact and to quantify the outcome of secondary atomization in terms of droplet size and number. 
The results evidence that surface topography, wettability and liquid properties combine in a complex way to alter the wetting behaviour of droplets at impact at different surface temperatures. The relative importance of the dynamic vapor pressure associated with the rate of vaporization and surface roughness increases with surface temperature and becomes dominant at the film boiling regime. The analysis is aimed at giving a phenomenological description of droplet disintegration within the various heat transfer regimes 7. Measuring adhesion on rough surfaces using atomic force microscopy with a liquid probe Directory of Open Access Journals (Sweden) Juan V. Escobar 2017-04-01 Full Text Available We present a procedure to perform and interpret pull-off force measurements during the jump-off-contact process between a liquid drop and rough surfaces using a conventional atomic force microscope. In this method, a micrometric liquid mercury drop is attached to an AFM tipless cantilever to measure the force required to pull this drop off a rough surface. We test the method with two surfaces: a square array of nanometer-sized peaks commonly used for the determination of AFM tip sharpness and a multi-scaled rough diamond surface containing sub-micrometer protrusions. Measurements are carried out in a nitrogen atmosphere to avoid water capillary interactions. We obtain information about the average force of adhesion between a single peak or protrusion and the liquid drop. This procedure could provide useful microscopic information to improve our understanding of wetting phenomena on rough surfaces. 8. Distinction of heterogeneity on Au nanostructured surface based on phase contrast imaging of atomic force microscopy International Nuclear Information System (INIS) Jung, Mi; Choi, Jeong-Woo 2010-01-01 The discrimination of the heterogeneity of different materials on nanostructured surfaces has attracted a great deal of interest in biotechnology as well as nanotechnology. 
Phase imaging through tapping mode of atomic force microscopy (TMAFM) can be used to distinguish the heterogeneity on a nanostructured surface. Nanostructures were fabricated using anodic aluminum oxide (AAO). An 11-mercaptoundecanoic acid (11-MUA) layer adsorbed onto the Au nanodots through self-assembly to improve the bio-compatibility. The Au nanostructures that were modified with 11-MUA and the concave surfaces were investigated using the TMAFM phase images to compare the heterogeneous and homogeneous nanostructured surfaces. Although the topography and phase images were taken simultaneously, the images were different. Therefore, the contrast in the TMAFM phase images revealed the different compositional materials on the heterogeneous nanostructure surface. 9. Preparation of nanocomposites by reversible addition-fragmentation chain transfer polymerization from the surface of quantum dots in miniemulsion NARCIS (Netherlands) Carvalho Esteves, de A.C.; Hodge, P.; Trindade, T.; Barros-Timmons, A.M.M.V. 2009-01-01 Herein, we report the synthesis of quantum dots (QDs)/polymer nanocomposites by reversible addition-fragmentation chain transfer (RAFT) polymerization in miniemulsions using a grafting from approach. First, the surfaces of CdS and CdSe QDs were functionalized using a chain transfer agent, a 10. Adsorption of charged and neutral polymer chains on silica surfaces: The role of electrostatics, volume exclusion, and hydrogen bonding NARCIS (Netherlands) Spruijt, Evan; Biesheuvel, P.M.; de Vos, Wiebe Matthijs 2015-01-01 We develop an off-lattice (continuum) model to describe the adsorption of neutral polymer chains and polyelectrolytes to surfaces. Our continuum description allows taking excluded volume interactions between polymer chains and ions directly into account. To implement those interactions, we use a 11. 
Phonon-mediated decay of an atom in a surface-induced potential International Nuclear Information System (INIS) Kien, Fam Le; Hakuta, K.; Dutta Gupta, S. 2007-01-01 We study phonon-mediated transitions between translational levels of an atom in a surface-induced potential. We present a general master equation governing the dynamics of the translational states of the atom. In the framework of the Debye model, we derive compact expressions for the rates for both upward and downward transitions. Numerical calculations for the transition rates are performed for a deep silica-induced potential allowing for a large number of bound levels as well as free states of a cesium atom. The total absorption rate is shown to be determined mainly by the bound-to-bound transitions for deep bound levels and by bound-to-free transitions for shallow bound levels. Moreover, the phonon emission and absorption processes can be orders of magnitude larger for deep bound levels as compared to the shallow bound ones. We also study various types of transitions from free states. We show that, for thermal atomic cesium with a temperature in the range from 100 μK to 400 μK in the vicinity of a silica surface with a temperature of 300 K, the adsorption (free-to-bound decay) rate is about two times larger than the heating (free-to-free upward decay) rate, while the cooling (free-to-free downward decay) rate is negligible 12. A cellular automata simulation study of surface roughening resulting from multi-atom etch pit generation during sputtering Energy Technology Data Exchange (ETDEWEB) Toh, Y S; Nobes, M J; Carter, G [Dept. of Electronic and Electrical Engineering, Univ. of Salford (United Kingdom)] 1992-04-01 A two-dimensional square matrix of pseudo-atomic positions is erected and atom removal from the "surface" is effected randomly. Either single atoms or groups of atoms (to simulate multi-atom pit generation) are removed.
The characteristics of the evolving roughened, terraced "surface" are evaluated as a function of the total number of atoms, or equivalent numbers of atomic layers, removed. These characteristics include the "mean" position of the sputtered surface, the standard deviation of terrace length about the mean and the form of the terrace length distributions. The results of the single-atom removal mode compare exactly with theoretical predictions in that, for large numbers of atoms removed, the depth position of the mean of the terrace length distribution is identical to the mean sputtered depth and the standard deviation increases as the square root of this depth. For multi-atom removal modes (which cannot be predicted theoretically) the standard deviation also increases as the square root of the mean sputtered depth but with a larger proportionality constant. The implications of these observations for the evolution of surface morphology during high-yield sputtering are discussed. (orig.). 13. Topography and Mechanical Property Mapping of International Simple Glass Surfaces with Atomic Force Microscopy Energy Technology Data Exchange (ETDEWEB) Pierce, Eric M [ORNL] 2014-01-01 Quantitative Nanomechanical Peak Force (PF-QNM) TappingMode™ atomic force microscopy measurements are presented for the first time on polished glass surfaces. The PF-QNM technique allows for topography and mechanical property information to be measured simultaneously at each pixel. Results for the International Simple Glass, which represents a simplified version of SON68 glass, suggest that the average Young's modulus of 78.8 ± 15.1 GPa is within the experimental error of the modulus measured for SON68 glass (83.6 ± 2 GPa) with conventional approaches.
Application of the PF-QNM technique will be extended to in situ glass corrosion experiments with the goal of gaining atomic-scale insights into altered layer development by exploiting the mechanical property differences that exist between silica gel (e.g., altered layer) and pristine glass surface. 14. Second order classical perturbation theory for atom surface scattering: Analysis of asymmetry in the angular distribution Energy Technology Data Exchange (ETDEWEB) Zhou, Yun, E-mail: [email protected]; Pollak, Eli, E-mail: [email protected] [Chemical Physics Department, Weizmann Institute of Science, 76100 Rehovot (Israel); Miret-Artés, Salvador, E-mail: [email protected] [Instituto de Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 123, 28006 Madrid (Spain) 2014-01-14 A second order classical perturbation theory is developed and applied to elastic atom corrugated surface scattering. The resulting theory accounts for experimentally observed asymmetry in the final angular distributions. These include qualitative features, such as reduction of the asymmetry in the intensity of the rainbow peaks with increased incidence energy as well as the asymmetry in the location of the rainbow peaks with respect to the specular scattering angle. The theory is especially applicable to “soft” corrugated potentials. Expressions for the angular distribution are derived for the exponential repulsive and Morse potential models. The theory is implemented numerically to a simplified model of the scattering of an Ar atom from a LiF(100) surface. 15. Second order classical perturbation theory for atom surface scattering: analysis of asymmetry in the angular distribution. Science.gov (United States) Zhou, Yun; Pollak, Eli; Miret-Artés, Salvador 2014-01-14 A second order classical perturbation theory is developed and applied to elastic atom corrugated surface scattering. 
The resulting theory accounts for experimentally observed asymmetry in the final angular distributions. These include qualitative features, such as reduction of the asymmetry in the intensity of the rainbow peaks with increased incidence energy as well as the asymmetry in the location of the rainbow peaks with respect to the specular scattering angle. The theory is especially applicable to "soft" corrugated potentials. Expressions for the angular distribution are derived for the exponential repulsive and Morse potential models. The theory is implemented numerically to a simplified model of the scattering of an Ar atom from a LiF(100) surface. 16. A Solid-State Deuterium NMR and SFG Study of the Side Chain Dynamics of Peptides Adsorbed onto Surfaces Science.gov (United States) Breen, Nicholas F.; Weidner, Tobias; Li, Kun; Castner, David G.; Drobny, Gary P. 2011-01-01 The artificial amphiphilic peptide LKα14 adopts a helical structure at interfaces, with opposite orientation of its leucine (L, hydrophobic) and lysine (K, hydrophilic) side chains. When adsorbed onto surfaces, different residue side chains necessarily have different proximities to the surface, depending on both their position in the helix and the composition of the surface itself. Deuterating the individual leucine residues (isopropyl-d7) permits the use of solid-state deuterium NMR as a site-specific probe of side chain dynamics. In conjunction with SFG as a probe of the peptide binding face, we demonstrate that the mobility of specific leucine side chains at the interface is quantifiable in terms of their surface proximity. PMID:19764755 17. 
Study of the Adsorption of Atoms and Molecules on Silicon Surfaces: Crystallography and Electronic Structure International Nuclear Information System (INIS) Bengio, Silvina 2003-01-01 This thesis work has been concerned with the adsorption properties of silicon surfaces. The atomic and electronic structure of molecules and atoms adsorbed on Si has been investigated by means of photoemission experiments combined with synchrotron radiation. The quantitative atomic structure determination was carried out by applying the photoelectron diffraction (PhD) technique. This technique is sensitive to the local structure around a reference atomic species and has elemental and chemical-state specificity. This approach has been applied to three quite different systems with different degrees of complexity: Sb/Si(111)-√3×√3R30°, H2O/Si(100)-2×1 and NH3/Si(111)-7×7. Our results show that Sb, which forms a (√3×√3)R30° phase, produces a bulk-like-terminated Si(111)-1×1 substrate free of stacking faults. Regarding the atomic structure of its interface, this study strongly favours the T4-site milk-stool model over the H3 one. An important aspect regarding the H2O/Si(100)-2×1 system was establishing the limits of precision with which one can determine not only the location of the adsorbed hydroxyl (OH) species, but also the extent to which this adsorption modifies the asymmetric dimers of the clean surface to which it is bonded. On the Si(111)-7×7 surface the problem is particularly complex because there are several different potentially active sites for NH3 adsorption and fragmentation. The application of the PhD method, however, has shown that the majority of the N atoms are on so-called 'rest atom' sites when deposited at RT. This is consistent with the N being in the NH2 chemical state. This investigation represents the first quantitative structural study of any molecular adsorbate on the complex Si(111)-7×7 surface. This atomic structure determination shows that PhD is a powerful tool for atomic structure
determination. The molecular systems interact with the active sites of the substrate fragments, producing a surface with only short-range order. This long-range disorder is produced by the 18. Measurement of the surface susceptibility and the surface conductivity of atomically thin by spectroscopic ellipsometry KAUST Repository Jayaswal, Gaurav; Dai, Zhenyu; Zhang, Xixiang; Bagnarol, Mirko; Martucci, Alessandro; Merano, Michele 2017-01-01 We show how to correctly extract from the ellipsometric data the surface susceptibility and the surface conductivity that describe the optical properties of monolayer MoS₂. Theoretically, these parameters stem from modelling a single-layer two-dimensional crystal as a surface current, a truly two-dimensional model. Current experimental practice is to consider this model equivalent to a homogeneous slab with an effective thickness given by the interlayer spacing of the exfoliated bulk material. We prove that the error in the evaluation of the surface susceptibility of monolayer MoS₂, owing to the use of the slab model, is 10% or greater, a significant discrepancy in the determination of the optical properties of this material. 19. Measurement of the surface susceptibility and the surface conductivity of atomically thin by spectroscopic ellipsometry KAUST Repository Jayaswal, Gaurav 2017-10-01 We show how to correctly extract from the ellipsometric data the surface susceptibility and the surface conductivity that describe the optical properties of monolayer MoS₂. Theoretically, these parameters stem from modelling a single-layer two-dimensional crystal as a surface current, a truly two-dimensional model. Current experimental practice is to consider this model equivalent to a homogeneous slab with an effective thickness given by the interlayer spacing of the exfoliated bulk material.
We prove that the error in the evaluation of the surface susceptibility of monolayer MoS₂, owing to the use of the slab model, is 10% or greater, a significant discrepancy in the determination of the optical properties of this material. 20. Adsorption/desorption kinetics of Na atoms on the reconstructed Si(111)-7×7 surface International Nuclear Information System (INIS) Chauhan, Amit Kumar Singh; Govind; Shivaprasad, S.M. 2010-01-01 Self-assembled nanostructures on a periodic template are fundamentally and technologically important, as they offer the possibility to fabricate and pattern micro/nano-electronics for sensors, ultra-high-density memories and nanocatalysts. Alkali-metal (AM) nanostructures grown on semiconductor surfaces have received considerable attention because of their simple hydrogen-like electronic structure. However, little effort has been made to understand the fundamental aspects of the growth mechanism of self-assembled AM nanostructures on semiconductor surfaces. In this paper, we report a systematic investigation of kinetically controlled room-temperature (RT) adsorption/desorption of sodium (Na) metal atoms on the clean reconstructed Si(111)-7×7 surface by X-ray photoelectron spectroscopy (XPS). The RT uptake curve shows a layer-by-layer (Frank-van der Merwe) growth mode of Na on Si(111)-7×7 surfaces, and a shift is observed in the binding energy position of the Na (1s) spectra. The thermal stability of the Na/Si(111) system was inspected by annealing the system to higher substrate temperatures. In the temperature range from RT to 350 °C, temperature-induced mobility allows the excess Na atoms sitting on top of the bilayer to rearrange themselves. Na atoms desorbed over a wide temperature range from 370 °C, before depleting the Si(111) surface at 720 °C.
The valence-band (VB) spectra acquired during Na growth revealed the development of new electronic states near the Fermi level, and desorption leads to their termination. For Na adsorption up to 2 monolayers, a decrease in work function (−1.35 eV) was observed, whereas the work function of the system increases monotonically with Na desorption from the Si surface, as also observed in other studies. This kinetic and thermodynamic study of the Na-adsorbed Si(111)-7×7 system can be utilized in the fabrication of sensors used in night-vision devices. 1. Lateral and vertical manipulations of single atoms on the Ag(1 1 1) surface with the copper single-atom and trimer-apex tips International Nuclear Information System (INIS) Xie Yiqun; Yang Tianxing; Ye Xiang; Huang Lei 2011-01-01 We study the lateral and vertical manipulations of single Ag and Cu atoms on the Ag(1 1 1) surface with the Cu single-atom and trimer-apex tips using molecular statics simulations. The reliability of the lateral manipulation with the Cu single-atom tip is investigated, and compared with that for the Ag tips. We find that overall the manipulation reliability (MR) increases with decreasing tip height, and in a wide tip-height range the MR is better than those for both the Ag single-atom and trimer-apex tips. This is due to the stronger attractive force of the Cu tip and its better stability against the interactions with the Ag surface. With the Cu trimer-apex tip, the single Ag and Cu adatoms can be picked up from the flat Ag(1 1 1) surface, and moreover a reversible vertical manipulation of single Ag atoms on the stepped Ag(1 1 1) surface is possible, suggesting a method to modify two-dimensional Ag nanostructures on the Ag(1 1 1) surface with the Cu trimer-apex tip. 2.
Defects in oxide surfaces studied by atomic force and scanning tunneling microscopy Directory of Open Access Journals (Sweden) Thomas König 2011-01-01 Full Text Available Surfaces of thin oxide films were investigated by means of a dual-mode NC-AFM/STM. Apart from imaging the surface termination by NC-AFM with atomic resolution, point defects in magnesium oxide on Ag(001) and line defects in aluminum oxide on NiAl(110), respectively, were thoroughly studied. The contact potential was determined by Kelvin probe force microscopy (KPFM) and the electronic structure by scanning tunneling spectroscopy (STS). On magnesium oxide, different color centers, i.e., F0, F+, F2+ and divacancies, have different effects on the contact potential. These differences enabled classification and unambiguous differentiation by KPFM. True atomic resolution shows the topography at line defects in aluminum oxide. At these domain boundaries, STS and KPFM verify F2+-like centers, which have been predicted by density functional theory calculations. Thus, by determining the contact potential and the electronic structure with a spatial resolution in the nanometer range, NC-AFM and STM can be successfully applied to thin oxide films beyond imaging the topography of the surface atoms. 3. Interactions between nitrogen molecules and barium atoms on the Ru(0001) surface International Nuclear Information System (INIS) Zhao Xinxin; Mi Yiming; Xu Hongxia; Wang Lili; Ren Li; Tao Xiangming; Tan Mingqiu 2011-01-01 We performed first-principles calculations on the interactions between nitrogen molecules and barium atoms on the Ru(0001) surface using density functional theory methods. It was shown that the presence of barium atoms weakened the bond strength of the nitrogen molecules. The bond length of the nitrogen molecule increases from 0.113 nm on Ru(001)-N2 to 0.120 nm on the Ru(001)-N2/Ba surface.
Meanwhile, the stretching vibrational frequency of the nitrogen molecule decreased from 2222 cm⁻¹, and the charge transfer toward the nitrogen molecule increased from 0.3 e to 1.1 e. Charge was mainly transferred from the 6s orbitals of the barium atoms to the 4d orbitals of the substrate, which enhanced the hybridization between the 4d and 2π orbitals and increased the dipole moments of the 5σ and dπ orbitals of the nitrogen molecule. The molecular dipole moment of the nitrogen molecule was changed by −0.136 e·Å. It was suggested that barium has the character of an electronic promoter in the process of activating nitrogen molecules on the Ru(0001) surface. (authors) 4. Diffractive scattering of H atoms from the (001) surface of LiF at 78 K International Nuclear Information System (INIS) Caracciolo, G.; Iannotta, S.; Scoles, G.; Valbusa, U. 1980-01-01 We have built an apparatus for the measurement of high-resolution diffractive scattering of hydrogen atoms from crystal surfaces. The apparatus comprises a hydrogen atom beam source, a hexapolar magnetic field velocity selector, a variable-temperature UHV crystal manipulator, and a rotatable bolometer detector. The diffraction pattern of a beam of hydrogen atoms scattered by a (001) LiF surface at 78 K has been obtained for different angles of incidence and different orientations of the crystal. The Debye-Waller factor has been measured, leading to a surface Debye temperature θ_S = 550 ± 38 K. The corrugated-hard-wall-with-a-well model of Garibaldi et al. [Surf. Sci. 48, 649 (1975)] has been used for the interpretation of the intensities of the diffracted peaks. By means of a best-fit procedure we obtain a main "corrugation" parameter ξ0 = 0.095 Å. By comparison of the data with the theory of Cabrera et al. [Surf. Sci. 19, 70 (1967)] to first order, the strength parameters of a periodic Morse potential have been determined 5.
Quantitative characterization of the atomic-scale structure of oxyhydroxides in rusts formed on steel surfaces International Nuclear Information System (INIS) Saito, M.; Suzuki, S.; Kimura, M.; Suzuki, T.; Kihira, H.; Waseda, Y. 2005-01-01 Quantitative X-ray structural analysis coupled with anomalous X-ray scattering has been used for characterizing the atomic-scale structure of rust formed on steel surfaces. Samples were prepared from rust layers formed on the surfaces of two commercial steels. X-ray scattered intensity profiles of the two samples showed that the rusts consisted mainly of two types of ferric oxyhydroxide, α-FeOOH and γ-FeOOH. The amounts of these rust components and the realistic atomic arrangements in the components were estimated by fitting both the ordinary and the environmental interference functions with a model structure calculated using the reverse Monte Carlo simulation technique. The two rust components were found to be the network structure formed by FeO6 octahedral units, the network structure itself deviating from the ideal case. The present results also suggest that the structural analysis method using anomalous X-ray scattering and the reverse Monte Carlo technique is very successful in determining the atomic-scale structure of rusts formed on steel surfaces 6. Influences of H on the Adsorption of a Single Ag Atom on the Si(111)-7×7 Surface Directory of Open Access Journals (Sweden) Lin Xiu-Zhu 2009-01-01 Full Text Available Abstract The adsorption of a single Ag atom on both the clean Si(111)-7×7 and the 19-hydrogen-terminated Si(111)-7×7 (hereafter referred to as 19H-Si(111)-7×7) surfaces has been investigated using first-principles calculations. The results indicated that the pre-adsorbed H on the Si surface altered the surface electronic properties of Si and influenced the adsorption properties of the Ag atom on the H-terminated Si surface (e.g., adsorption site and bonding properties).
Difference charge-density data indicated that a covalent bond is formed between the adsorbed Ag and H atoms on the 19H-Si(111)-7×7 surface, which increases the adsorption energy of the Ag atom on the Si surface. 7. Hydrophilization of poly(ether ether ketone) films by surface-initiated atom transfer radical polymerization DEFF Research Database (Denmark) Fristrup, Charlotte Juel; Jankova Atanasova, Katja; Hvilsted, Søren 2010-01-01 Surface-Initiated Atom Transfer Radical Polymerization (SI-ATRP) has been exploited to hydrophilize PEEK. The ketone groups on the PEEK surface were reduced to hydroxyl groups, which were converted to bromoisobutyrate initiating sites for SI-ATRP. The modification steps were followed by contact angle measurements and XPS. Moreover, ATR-FTIR has been used to confirm the formation of initiating groups. Grafting of PEGMA from PEEK was performed in aqueous solution. The presence of the PPEGMA grafts on PEEK was revealed by the thermograms from TGA, whereas investigations with AFM rejected changes... 8. Surface modelling on heavy-atom crystalline compounds: HfO2 and UO2 fluorite structures International Nuclear Information System (INIS) Evarestov, Robert; Bandura, Andrei; Blokhin, Eugeny 2009-01-01 The study of the bulk and surface properties of cubic (fluorite structure) HfO2 and UO2 was performed using hybrid Hartree-Fock density functional theory linear combination of atomic orbitals simulations via the CRYSTAL06 computer code. The Stuttgart small-core pseudopotentials and corresponding basis sets were used for the core-valence interactions. The influence of relativistic effects on the structure and properties of the systems was studied. It was found that the surface properties of the Mott-Hubbard dielectric UO2 differ from those found for other metal oxides with a closed-shell configuration of d-electrons 9. Quantum theory of scattering of atoms and diatomic molecules by solid surfaces International Nuclear Information System (INIS) Liu, W.S.
1973-01-01 The unitary treatment, based on standard t-matrix theory, of the quantum theory of scattering of atoms by solid surfaces, is extended to the scattering of particles having internal degrees of freedom by perfect harmonic crystalline surfaces. The diagonal matrix element of the interaction potential which enters into the quantum scattering theory is obtained to represent the potential for the specular beam. From the two-potential formula, the scattering intensities for the diffracted beams and the inelastic beams with or without internal transitions of the particles are obtained by solving the equation for the t-matrix elements. (author) 10. Surface tension effect on the mechanical properties of nanomaterials measured by atomic force microscopy Science.gov (United States) Cuenot, Stéphane; Frétigny, Christian; Demoustier-Champagne, Sophie; Nysten, Bernard 2004-04-01 The effect of reduced size on the elastic properties measured on silver and lead nanowires and on polypyrrole nanotubes with an outer diameter ranging between 30 and 250 nm is presented and discussed. Resonant-contact atomic force microscopy (AFM) is used to measure their apparent elastic modulus. The measured modulus of the nanomaterials with smaller diameters is significantly higher than that of the larger ones. The latter is comparable to the macroscopic modulus of the materials. The increase of the apparent elastic modulus for the smaller diameters is attributed to surface tension effects. The surface tension of the probed material may be experimentally determined from these AFM measurements. 11. Charge-state distribution of MeV He ions scattered from the surface atoms International Nuclear Information System (INIS) Kimura, Kenji; Ohtsuka, Hisashi; Mannami, Michihiko 1993-01-01 The charge-state distribution of 500-keV He ions scattered from a SnTe (001) surface has been investigated using a new technique of high-resolution high-energy ion scattering spectroscopy. 
The observed charge-state distribution of ions scattered from the topmost atomic layer coincides with that of ions scattered from the subsurface region and does not depend on the incident charge state but depends on the exit angle. The observed exit-angle dependence is explained by a model which includes the charge-exchange process with the valence electrons in the tail of the electron distribution at the surface. (author) 12. In situ AFM investigation of electrochemically induced surface-initiated atom-transfer radical polymerization. Science.gov (United States) Li, Bin; Yu, Bo; Zhou, Feng 2013-02-12 Electrochemically induced surface-initiated atom-transfer radical polymerization is traced by in situ AFM technology for the first time, which allows visualization of the polymer growth process. It affords a fundamental insight into the surface morphology and growth mechanism simultaneously. Using this technique, the polymerization kinetics of two model monomers were studied, namely the anionic 3-sulfopropyl methacrylate potassium salt (SPMA) and the cationic 2-(metharyloyloxy)ethyltrimethylammonium chloride (METAC). The growth of METAC is significantly improved by screening the ammonium cations by the addition of ionic liquid electrolyte in aqueous solution. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 13. Atomic force microscopic study of the effects of ethanol on yeast cell surface morphology. Science.gov (United States) Canetta, Elisabetta; Adya, Ashok K; Walker, Graeme M 2006-02-01 The detrimental effects of ethanol toxicity on the cell surface morphology of Saccharomyces cerevisiae (strain NCYC 1681) and Schizosaccharomyces pombe (strain DVPB 1354) were investigated using an atomic force microscope (AFM). In combination with culture viability and mean cell volume measurements AFM studies allowed us to relate the cell surface morphological changes, observed on nanometer lateral resolution, with the cellular stress physiology. 
Exposing yeasts to increasingly stressful concentrations of ethanol led to decreased cell viabilities and mean cell volumes. Together with the roughness and bearing volume analyses of the AFM images, the results provided novel insight into the relative ethanol tolerance of S. cerevisiae and Sc. pombe. 14. Structural re-alignment in an immunologic surface region of ricin A chain Energy Technology Data Exchange (ETDEWEB) Zemla, A T; Zhou, C E 2007-07-24 We compared structure alignments generated by several protein structure comparison programs to determine whether existing methods would satisfactorily align residues at a highly conserved position within an immunogenic loop in ribosome inactivating proteins (RIPs). Using default settings, structure alignments generated by several programs (CE, DaliLite, FATCAT, LGA, MAMMOTH, MATRAS, SHEBA, SSM) failed to align the respective conserved residues, although LGA reported correct residue-residue (R-R) correspondences when the beta-carbon (Cb) position was used as the point of reference in the alignment calculations. Further tests using variable points of reference indicated that points distal from the beta carbon along a vector connecting the alpha and beta carbons yielded rigid structural alignments in which residues known to be highly conserved in RIPs were reported as corresponding residues in structural comparisons between ricin A chain, abrin-A, and other RIPs. Results suggest that approaches to structure alignment employing alternate point representations corresponding to side chain position may yield structure alignments that are more consistent with observed conservation of functional surface residues than do standard alignment programs, which apply uniform criteria for alignment (i.e., alpha carbon (Ca) as point of reference) along the entirety of the peptide chain.
We present the results of tests that suggest the utility of allowing user-specified points of reference in generating alternate structural alignments, and we present a web server for automatically generating such alignments. 15. Interaction of scandium and titanium atoms with a carbon surface containing five- and seven-membered rings International Nuclear Information System (INIS) Krasnov, P. O.; Eliseeva, N. S.; Kuzubov, A. A. 2012-01-01 The use of carbon nanotubes coated by atoms of transition metals to store molecular hydrogen is associated with the problem of the aggregation of these atoms, which leads to the formation of metal clusters. The quantum-chemical simulation of cluster models of the carbon surface of a graphene type with scandium and titanium atoms has been performed. It has been shown that the presence of five- and seven-membered rings, in addition to six-membered rings, in these structures makes it possible to strongly suppress the processes of the migration of metal atoms over the surface, preventing their clustering. 16. Novel pathways for elimination of chlorine atoms from growing Si(100) surfaces in CVD reactors Science.gov (United States) Kunioshi, Nílson; Hagino, Sho; Fuwa, Akio; Yamaguchi, Katsunori 2018-05-01 Reactions leading to elimination of chlorine atoms from growing Si(100) surfaces were simulated using clusters of silicon atoms of different sizes and shapes, at the UB3LYP/6-31G(d,p) level of theory. The reactions of type SiCl2(s) + 2 H2(g), where (s) indicates an adsorbed species at the surface and (g) a gas-phase species, were found to proceed in two steps: SiCl2(s) + H2(g) → SiHCl(s) + HCl(g) and SiHCl(s) + H2(g) → SiH2(s) + HCl(g), each having activation energies around 55 kcal/mol, a value which is comparable to experimental values published in the literature.
In addition, the results suggested that H-passivation of Si(100) surfaces supports reactions leading to canonical epitaxial growth, providing a plausible explanation for the convenience of passivating the surfaces prior to silicon deposition. The reactions analyzed here can therefore be seen as important steps in the mechanism of epitaxial growth of Si(100) surfaces. 17. Angular distribution of sputtered atoms from Al-Sn alloy and surface topography International Nuclear Information System (INIS) Wang Zhenxia; Pan Jisheng; Zhang Jiping; Tao Zhenlan 1992-01-01 If an alloy is sputtered, the angular distribution of the sputtered atoms can be different for each component. At high ion energies in the range of linear cascade theory, different energy distributions for components of different mass in the solid are predicted. Upon leaving the surface, i.e. overcoming the surface binding energy, these differences should show up in different angular distributions. Differences in the angular distribution are of much practical interest, for example, in thin-film deposition by sputtering and surface analysis by secondary-ion mass spectroscopy and Auger electron spectroscopy. Recently our experimental work has shown that for Fe-W alloy the surface microtopography becomes dominant and determines the shape of the angular distribution of the component. However, with the few experimental results available so far it is too early to draw any general conclusions for the angular distribution of the sputtered constituents. Thus, the aim of this work was to study further the influence of the surface topography on the shape of the angular distribution of sputtered atoms from an Al-Sn alloy. (Author) 18. Atomic and molecular oxygen adsorbed on (111) transition metal surfaces: Cu and Ni Energy Technology Data Exchange (ETDEWEB) López-Moreno, S., E-mail: [email protected] [Centro de Investigación en Corrosión, Universidad Autónoma de Campeche, Av.
Héroe de Nacozari 480, Campeche, Campeche 24029 (Mexico); Romero, A. H. [Physics Department, West Virginia University, Morgantown, West Virginia 26506-6315 (United States) 2015-04-21 Density functional theory is used to investigate the reaction of oxygen with clean copper and nickel [111]-surfaces. We study several alternative adsorption sites for atomic and molecular oxygen on both surfaces. The minimal energy geometries and adsorption energies are in good agreement with previous theoretical studies and experimental data. From all considered adsorption sites, we found a new O{sub 2} molecular precursor with two possible dissociation paths on the Cu(111) surface. Cross barrier energies for the molecular oxygen dissociation have been calculated by using the climbing image nudge elastic band method, and direct comparison with experimental results is performed. Finally, the structural changes and adsorption energies of oxygen adsorbed on the surface when there is a vacancy near the adsorption site are also considered. 19. Atomic force microscopy of surface topography of nitrogen plasma treated steel CERN Document Server Mahboubi, F 2002-01-01 Nitriding of steels using plasma environments has been practiced for many years. A lot of effort has been put into developing new methods, such as plasma immersion ion implantation (PI3) and radio frequency (RF) plasma nitriding, for mass transfer of nitrogen into the surface of the work piece. This article presents the results obtained from an in depth investigation of the surface morphology of the treated samples, carried out using an atomic force microscope. Samples from a microalloyed steel were treated by both methods for 5 hours at different temperatures ranging from 350 to 550 °C in a 75% N2-25% H2 atmosphere. It has been found that the surfaces of the samples treated by the PI3 technique, although having more favorable properties, were rougher than the surfaces treated by RF plasma nitriding.
Atomic and molecular oxygen adsorbed on (111) transition metal surfaces: Cu and Ni Science.gov (United States) López-Moreno, S.; Romero, A. H. 2015-04-01 Density functional theory is used to investigate the reaction of oxygen with clean copper and nickel [111]-surfaces. We study several alternative adsorption sites for atomic and molecular oxygen on both surfaces. The minimal energy geometries and adsorption energies are in good agreement with previous theoretical studies and experimental data. From all considered adsorption sites, we found a new O2 molecular precursor with two possible dissociation paths on the Cu(111) surface. Cross barrier energies for the molecular oxygen dissociation have been calculated by using the climbing image nudge elastic band method, and direct comparison with experimental results is performed. Finally, the structural changes and adsorption energies of oxygen adsorbed on the surface when there is a vacancy near the adsorption site are also considered. 1. Surface diffusion coefficient of Au atoms on single layer graphene grown on Cu Energy Technology Data Exchange (ETDEWEB) Ruffino, F., E-mail: [email protected]; Cacciato, G.; Grimaldi, M. G. [Dipartimento di Fisica ed Astronomia-Università di Catania, via S. Sofia 64, 95123 Catania, Italy and MATIS IMM-CNR, via S. Sofia 64, 95123 Catania (Italy) 2014-02-28 A 5 nm thick Au film was deposited on single layer graphene sheets grown on Cu. By thermal processes, the dewetting phenomenon of the Au film on the graphene was induced so as to form Au nanoparticles. The mean radius, surface-to-surface distance, and surface density evolution of the nanoparticles on the graphene sheets as a function of the annealing temperature were quantified by scanning electron microscopy analyses.
These quantitative data were analyzed within the classical mean-field nucleation theory so as to obtain the temperature-dependent Au atom surface diffusion coefficient on graphene: D{sub S}(T) = [(8.2±0.6)×10{sup −8}] exp[−(0.31±0.02 eV/atom)/kT] cm{sup 2}/s. 2. Method for atmospheric pressure reactive atom plasma processing for surface modification Science.gov (United States) Carr, Jeffrey W [Livermore, CA] 2009-09-22 Reactive atom plasma processing can be used to shape, polish, planarize and clean the surfaces of difficult materials with minimal subsurface damage. The apparatus and methods use a plasma torch, such as a conventional ICP torch. The workpiece and plasma torch are moved with respect to each other, whether by translating and/or rotating the workpiece, the plasma, or both. The plasma discharge from the torch can be used to shape, planarize, polish, and/or clean the surface of the workpiece, as well as to thin the workpiece. The processing may cause minimal or no damage to the workpiece underneath the surface, and may involve removing material from the surface of the workpiece. 3. Atomic and molecular oxygen adsorbed on (111) transition metal surfaces: Cu and Ni International Nuclear Information System (INIS) López-Moreno, S.; Romero, A. H. 2015-01-01 Density functional theory is used to investigate the reaction of oxygen with clean copper and nickel [111]-surfaces. We study several alternative adsorption sites for atomic and molecular oxygen on both surfaces. The minimal energy geometries and adsorption energies are in good agreement with previous theoretical studies and experimental data. From all considered adsorption sites, we found a new O 2 molecular precursor with two possible dissociation paths on the Cu(111) surface. Cross barrier energies for the molecular oxygen dissociation have been calculated by using the climbing image nudge elastic band method, and direct comparison with experimental results is performed.
Finally, the structural changes and adsorption energies of oxygen adsorbed on the surface when there is a vacancy near the adsorption site are also considered. 4. Surface kinetic roughening caused by dental erosion: An atomic force microscopy study Science.gov (United States) Quartarone, Eliana; Mustarelli, Piercarlo; Poggio, Claudio; Lombardini, Marco 2008-05-01 Surface kinetic roughening takes place in the case of both growth and erosion processes. Tooth surfaces are eroded by contact with acid drinks, such as those used to supplement mineral salts during sporting activities. Calcium-phosphate based (CPP-ACP) pastes are known to reduce the erosion process, and to favour the enamel remineralization. In this study we used atomic force microscopy (AFM) to investigate the surface roughening during dental erosion, and the mechanisms at the basis of the protection role exerted by a commercial CPP-ACP paste. We found a statistically significant difference (p<0.01) in the roughness of surfaces exposed and not exposed to the acid solutions. The treatment with the CPP-ACP paste determined a statistically significant reduction of the roughness values. By interpreting the AFM results in terms of fractal scaling concepts and continuum stochastic equations, we showed that the protection mechanism of the paste depends on the chemical properties of the acid solution. 5.
Deposition of O atomic layers on Si(100) substrates for epitaxial Si-O superlattices: investigation of the surface chemistry Energy Technology Data Exchange (ETDEWEB) Jayachandran, Suseendran, E-mail: [email protected] [KU Leuven, Department of Metallurgy and Materials, Castle Arenberg 44, B-3001 Leuven (Belgium); IMEC, Kapeldreef 75, 3001 Leuven (Belgium); Delabie, Annelies; Billen, Arne [KU Leuven, Department of Chemistry, Celestijnenlaan 200F, B-3001 Leuven (Belgium); IMEC, Kapeldreef 75, 3001 Leuven (Belgium); Dekkers, Harold; Douhard, Bastien; Conard, Thierry; Meersschaut, Johan; Caymax, Matty [IMEC, Kapeldreef 75, 3001 Leuven (Belgium); Vandervorst, Wilfried [KU Leuven, Department of Physics and Astronomy, Celestijnenlaan 200D, B-3001 Leuven (Belgium); IMEC, Kapeldreef 75, 3001 Leuven (Belgium); Heyns, Marc [KU Leuven, Department of Metallurgy and Materials, Castle Arenberg 44, B-3001 Leuven (Belgium); IMEC, Kapeldreef 75, 3001 Leuven (Belgium) 2015-01-01 Highlights: • Atomic layer is deposited by O{sub 3} chemisorption reaction on H-terminated Si(100). • O-content has critical impact on the epitaxial thickness of the above-deposited Si. • Oxygen atoms at dimer/back bond configurations enable epitaxial Si on O atomic layer. • Oxygen atoms at hydroxyl and more back bonds, disable epitaxial Si on O atomic layer. - Abstract: Epitaxial Si-O superlattices consist of alternating periods of crystalline Si layers and atomic layers of oxygen (O) with interesting electronic and optical properties. To understand the fundamentals of Si epitaxy on O atomic layers, we investigate the O surface species that can allow epitaxial Si chemical vapor deposition using silane. The surface reaction of ozone on H-terminated Si(100) is used for the O deposition. The oxygen content is controlled precisely at and near the atomic layer level and has a critical impact on the subsequent Si deposition. There exists only a small window of O-contents, i.e. 
0.7–0.9 atomic layers, for which the epitaxial deposition of Si can be realized. At these low O-contents, the O atoms are incorporated in the Si-Si dimers or back bonds (-OSiH), with the surface Si atoms mainly in the 1+ oxidation state, as indicated by infrared spectroscopy. This surface enables epitaxial seeding of Si. For O-contents higher than one atomic layer, the additional O atoms are incorporated in the Si-Si back bonds as well as in the Si-H bonds, where hydroxyl groups (-Si-OH) are created. In this case, the Si deposition thereon becomes completely amorphous. 6. Atomic Scale Structure-Chemistry Relationships at Oxide Catalyst Surfaces and Interfaces Science.gov (United States) McBriarty, Martin E. Oxide catalysts are integral to chemical production, fuel refining, and the removal of environmental pollutants. However, the atomic-scale phenomena which lead to the useful reactive properties of catalyst materials are not sufficiently understood. In this work, the tools of surface and interface science and electronic structure theory are applied to investigate the structure and chemical properties of catalytically active particles and ultrathin films supported on oxide single crystals. These studies focus on structure-property relationships in vanadium oxide, tungsten oxide, and mixed V-W oxides on the surfaces of alpha-Al2O3 and alpha-Fe2O3 (0001)-oriented single crystal substrates, two materials with nearly identical crystal structures but drastically different chemical properties. In situ synchrotron X-ray standing wave (XSW) measurements are sensitive to changes in the atomic-scale geometry of single crystal model catalyst surfaces through chemical reaction cycles, while X-ray photoelectron spectroscopy (XPS) reveals corresponding chemical changes. Experimental results agree with theoretical calculations of surface structures, allowing for detailed electronic structure investigations and predictions of surface chemical phenomena.
The surface configurations and oxidation states of V and W are found to depend on the coverage of each, and reversible structural shifts accompany chemical state changes through reduction-oxidation cycles. Substrate-dependent effects suggest how the choice of oxide support material may affect catalytic behavior. Additionally, the structure and chemistry of W deposited on alpha-Fe2O3 nanopowders are studied using X-ray absorption fine structure (XAFS) measurements in an attempt to bridge single crystal surface studies with real catalysts. These investigations of catalytically active material surfaces can inform the rational design of new catalysts for more efficient and sustainable chemistry. 7. Diffraction from polarization holographic gratings with surface relief in side-chain azobenzene polyesters DEFF Research Database (Denmark) Naydenova, I; Nikolova, L; Todorov, T 1998-01-01 We investigate the polarization properties of holographic gratings in side-chain azobenzene polyesters in which an anisotropic grating that is due to photoinduced linear and circular birefringence is recorded in the volume of the material and a relief grating appears on the surface. A theoretical...... model is proposed to explain the experimental results, making it possible to understand the influence of the different photoinduced effects. It is shown that at low intensity the polarization properties of the diffraction at these gratings are determined by the interaction of the linear and circular...... photobirefringences, and at larger intensity the influence of the surface relief dominates the effect of the circular anisotropy. Owing to the high recording efficiency of the polyesters, the +/-1-order diffracted waves change the polarization interference pattern during the holographic recording, resulting... 8. Analytic study of the chain dark decomposition reaction of iodides - atomic iodine donors - in the active medium of a pulsed chemical oxygen-iodine laser: 1.
Criteria for the development of the branching chain dark decomposition reaction of iodides International Nuclear Information System (INIS) Andreeva, Tamara L; Kuznetsova, S V; Maslov, Aleksandr I; Sorokin, Vadim N 2009-01-01 The scheme of chemical processes proceeding in the active medium of a pulsed chemical oxygen-iodine laser (COIL) is analysed. Based on the analysis performed, the complete system of differential equations corresponding to this scheme is replaced by a simplified system of equations describing in dimensionless variables the chain dark decomposition of iodides - atomic iodine donors - in the COIL active medium. The procedure for solving this system is described; the basic parameters determining the development of the chain reaction are found, and its specific time intervals are determined. The initial stage of the reaction is analysed and criteria for the development of the branching chain decomposition reaction of iodide in the COIL active medium are determined. (active media) 9. Surface functionalization of quantum dots with fine-structured pH-sensitive phospholipid polymer chains. Science.gov (United States) Liu, Yihua; Inoue, Yuuki; Ishihara, Kazuhiko 2015-11-01 To add novel functionality to quantum dots (QDs), we synthesized water-soluble and pH-responsive block-type polymers by reversible addition-fragmentation chain transfer (RAFT) polymerization. The polymers were composed of cytocompatible 2-methacryloyloxyethyl phosphorylcholine (MPC) polymer segments, which contain a small fraction of active ester groups and can be used to conjugate biologically active compounds to the polymer, and pH-responsive poly(2-(N,N-diethylamino)ethyl methacrylate (DEAEMA)) segments. One terminal of the polymer chain had a hydrophobic alkyl group that originated from the RAFT initiator. This hydrophobic group can bind to the hydrophobic layer on the QD surface. A fluorescent dye was conjugated to the polymer chains via the active ester group.
The block-type polymers have an amphiphilic nature in aqueous medium. The polymers were thus easily bound to the QD surface upon evaporation of the solvent from a solution containing the block-type polymer and QDs, yielding QD/fluorescence dye-conjugated polymer hybrid nanoparticles. Fluorescence resonance energy transfer (FRET) between the QDs (donors) and the fluorescent dye molecules (acceptors) was used to obtain information on the conformational dynamics of the immobilized polymers. Higher FRET efficiency of the QD/fluorescent dye-conjugated polymer hybrid nanoparticles was observed at pH 7.4 as compared to pH 5.0 due to a stretching-shrinking conformational motion of the poly(DEAEMA) segments in response to changes in pH. We concluded that the block-type MPC polymer-modified nanoparticles could be used to evaluate the pH of cells via FRET fluorescence based on the cytocompatibility of the MPC polymer. Copyright © 2015 Elsevier B.V. All rights reserved. 10. Ternary hybrid polymeric nanocomposites through grafting of polystyrene on graphene oxide-TiO{sub 2} by surface initiated atom transfer radical polymerization (SI-ATRP) Energy Technology Data Exchange (ETDEWEB) Kumar, Arvind; Bansal, Ankushi; Behera, Babita; Jain, Suman L.; Ray, Siddharth S., E-mail: [email protected] 2016-04-01 A ternary hybrid of graphene oxide-titania-polystyrene (GO-TiO{sub 2}-PS) nanocomposite is developed where polystyrene composition is regulated by controlling growth of polymer chains and nanoarchitectonics is discussed. Graphene Oxide-TiO{sub 2} (GO-TiO{sub 2}) nanocomposite is prepared by in-situ hydrothermal method and the surface is anchored with α-bromoisobutyryl bromide to activate GO-TiO{sub 2} as initiator for polymerization. In-situ grafting of polystyrene through surface initiated atom transfer radical polymerization (SI-ATRP) on this Br-functionalized nano-composite initiator yields GO-TiO{sub 2}-PS ternary hybrid.
Varying the monomer amount and keeping the concentration of initiator constant, polystyrene chain growth is regulated with narrow polydispersity to achieve the desired composition. This composite is well characterized by various analytical techniques like FTIR, XRD, DSC, SEM, TEM, and TGA. - Highlights: • Nanocomposite of ternary hybrid of GO-TiO{sub 2} with polystyrene. • PS is surface grafted on GO-TiO{sub 2}. • Polymer chain lengths are well regulated by SI-ATRP living polymerization. • Thermal stability of this hybrid is relatively high. 11. Ternary hybrid polymeric nanocomposites through grafting of polystyrene on graphene oxide-TiO_2 by surface initiated atom transfer radical polymerization (SI-ATRP) International Nuclear Information System (INIS) Kumar, Arvind; Bansal, Ankushi; Behera, Babita; Jain, Suman L.; Ray, Siddharth S. 2016-01-01 A ternary hybrid of graphene oxide-titania-polystyrene (GO-TiO_2-PS) nanocomposite is developed where polystyrene composition is regulated by controlling growth of polymer chains and nanoarchitectonics is discussed. Graphene Oxide-TiO_2 (GO-TiO_2) nanocomposite is prepared by in-situ hydrothermal method and the surface is anchored with α-bromoisobutyryl bromide to activate GO-TiO_2 as initiator for polymerization. In-situ grafting of polystyrene through surface initiated atom transfer radical polymerization (SI-ATRP) on this Br-functionalized nano-composite initiator yields GO-TiO_2-PS ternary hybrid. Varying the monomer amount and keeping the concentration of initiator constant, polystyrene chain growth is regulated with narrow polydispersity to achieve the desired composition. This composite is well characterized by various analytical techniques like FTIR, XRD, DSC, SEM, TEM, and TGA. - Highlights: • Nanocomposite of ternary hybrid of GO-TiO_2 with polystyrene. • PS is surface grafted on GO-TiO_2. • Polymer chain lengths are well regulated by SI-ATRP living polymerization.
• Thermal stability of this hybrid is relatively high. 12. Cell surface expression of single chain antibodies with applications to imaging of gene expression in vivo International Nuclear Information System (INIS) Northrop, Jeffrey P.; Bednarski, Mark; Li, King C.; Barbieri, Susan O.; Lu, Amy T.; Nguyen, Dee; Varadarajan, John; Osen, Maureen; Star-Lack, Josh 2003-01-01 Imaging of gene expression in vivo has many potential uses for biomedical research and drug discovery, ranging from the study of gene regulation and cancer to the non-invasive assessment of gene therapies. To streamline the development of imaging marker gene technologies for nuclear medicine, we propose a new approach to the design of reporter/probe pairs wherein the reporter is a cell surface-expressed single chain antibody variable fragment that has been raised against a low molecular weight imaging probe with optimized pharmacokinetic properties. Proof of concept of the approach was achieved using a single chain antibody variable fragment that binds with high affinity to fluorescein and an imaging probe consisting of fluorescein isothiocyanate coupled to the chelator diethylene triamine penta-acetic acid labeled with the gamma-emitter 111 In. We demonstrate specific high-affinity binding of this probe to the cell surface-expressed reporter in vitro and assess the in vivo biodistribution of the probe both in wild-type mice and in mice harboring tumor xenografts expressing the reporter. Specific uptake of the probe by, and in vivo imaging of, tumors expressing the reporter are shown. Since ScFvs with high affinities can be raised to almost any protein or small molecule, the proposed methodology may offer a new flexibility in the design of imaging tracer/reporter pairs wherein both probe pharmacokinetics and binding affinities can be readily optimized. (orig.) 13. Surface effects on sputtered atoms and their angular and energy dependence International Nuclear Information System (INIS) Hassanein, A.M. 
1985-04-01 A comprehensive three-dimensional Monte Carlo computer code, Ion Transport in Materials and Compounds (ITMC), has been developed to study in detail the surface-related phenomena that affect the amount of sputtered atoms and back-scattered ions and their angular and energy dependence. A number of important factors that can significantly affect the sputtering behavior of a surface can be studied in detail, such as having different surface properties and composition than the bulk and synergistic effects due to surface segregation of alloys. These factors can be important in determining the lifetime of fusion reactor first walls and limiters. The ITMC Code is based on Monte Carlo methods to track down the path and the damage produced by charged particles as they slow down in solid metal surfaces or compounds. The major advantages of the ITMC code are its flexibility and ability to use and compare all existing models for energy losses, all known interatomic potentials, and to use different materials and compounds with different surface and bulk composition to allow for dynamic surface composition changes. There is good agreement between the code and available experimental results without using adjustable parameters for the energy-loss mechanisms. The ITMC Code is highly optimized, very fast to run and easy to use. 14. Electronic spectral properties of surfaces and adsorbates and atom-adsorbate van der Waals interactions International Nuclear Information System (INIS) Lovric, D.; Gumhalter, B. 1988-01-01 The relevance of van der Waals interactions in the scattering of neutral atoms from adsorbates has been recently confirmed by highly sensitive molecular-beam techniques.
The theoretical descriptions of the collision dynamics which followed the experimental studies have necessitated very careful qualitative and quantitative examinations and evaluations of the properties of atom-adsorbate van der Waals interactions for specific systems. In this work we present a microscopic calculation of the strengths and reference-plane positions for van der Waals potentials relevant for scattering of He atoms from CO adsorbed on various metallic substrates. In order to take into account the specificities of the polarization properties of real metals (noble and transition metals) and of chemisorbed CO, we first calculate the spectra of the electronic excitations characteristic of the respective electronic subsystems by using various data sources available and combine them with the existing theoretical models. The reliability of the calculated spectra is then verified in each particular case by universal sum rules which may be established for the electronic excitations of surfaces and adsorbates. The substrate and adsorbate polarization properties which derive from these calculations serve as input data for the evaluation of the strengths and reference-plane positions of van der Waals potentials whose computed values are tabulated for a number of real chemisorption systems. The implications of the obtained results are discussed in regard to the atom-adsorbate scattering cross sections pertinent to molecular-beam scattering experiments. 15.
Self-diffusion dynamic behavior of atomic clusters on Re(0 0 0 1) surface Energy Technology Data Exchange (ETDEWEB) Liu Fusheng [Department of Applied Physics, Hunan University, Changsha 410082 (China); Hu Wangyu, E-mail: [email protected] [Department of Applied Physics, Hunan University, Changsha 410082 (China); Deng Huiqiu; Luo Wenhua; Xiao Shifang [Department of Applied Physics, Hunan University, Changsha 410082 (China); Yang Jianyu [Department of Maths and Physics, Hunan Institute of Engineering, Xiangtan 411104 (China) 2009-08-15 Using molecular dynamics simulations and a modified analytic embedded atom potential, the self-diffusion dynamics of rhenium atomic clusters up to seven atoms on the Re(0 0 0 1) surface have been studied in the temperature range from 600 K to 1900 K. The simulation time varies from 20 ns to 200 ns according to the cluster sizes and the temperature. The heptamer and trimer are more stable compared to other neighboring non-compact clusters. The diffusion coefficients of clusters are derived from the mean square displacement of the cluster's mass center, and diffusion prefactors D{sub 0} and activation energies E{sub a} are derived from the Arrhenius relation. It is found that the Arrhenius relation of the adatom can be divided into two parts at different temperature ranges. The activation energy of clusters increases with the number of atoms in the cluster. The prefactor of the heptamer is 2-3 orders of magnitude higher than a usual prefactor because of a large number of nonequivalent diffusion processes. The trimer and heptamer are the nuclei at different temperature ranges according to the nucleation theory. 16.
Adsorption behavior of Fe atoms on a naphthalocyanine monolayer on Ag(111) surface Energy Technology Data Exchange (ETDEWEB) Yan, Linghao; Wu, Rongting; Bao, Deliang; Ren, Junhai; Zhang, Yanfang; Zhang, Haigang; Huang, Li; Wang, Yeliang; Du, Shixuan; Huan, Qing; Gao, Hong-Jun 2015-05-29 Adsorption behavior of Fe atoms on a metal-free naphthalocyanine (H2Nc) monolayer on Ag(111) surface at room temperature has been investigated using scanning tunneling microscopy combined with density functional theory (DFT) based calculations. We found that the Fe atoms adsorbed at the centers of H2Nc molecules and formed Fe-H2Nc complexes at low coverage. DFT calculations show that the configuration of Fe at the center of a molecule is the most stable site, in good agreement with the experimental observations. After an Fe-H2Nc complex monolayer was formed, the extra Fe atoms self-assembled into Fe clusters of uniform size and adsorbed dispersively at the interstitial positions of the Fe-H2Nc complex monolayer. Furthermore, the H2Nc monolayer grown on Ag(111) could be a good template to grow dispersed magnetic metal atoms and clusters at room temperature for further investigation of their magnetism-related properties. 17. The surface and interior evolution of Ceres revealed by fractures and secondary crater chains Science.gov (United States) Scully, Jennifer E. C.; Buczkowski, Debra; Schmedemann, Nico; King, Scott; O'Brien, David P.; Castillo-Rogez, Julie; Raymond, Carol; Marchi, Simone; Russell, Christopher T.; Mitri, Giuseppe; Bland, Michael T. 2016-10-01 Dawn became the first spacecraft to visit and orbit Ceres, a dwarf planet and the largest body in the asteroid belt (radius ~470 km) (Russell et al., 2016). Before Dawn's arrival, telescopic observations and thermal evolution modeling indicated Ceres was differentiated, with an average density of 2,100 kg/m3 (e.g. McCord & Sotin, 2005; Castillo-Rogez & McCord, 2010).
Moreover, pervasive viscous relaxation in a water-ice-rich outer layer was predicted to erase most features on Ceres' surface (Bland, 2013). However, a full understanding of Ceres' surface and interior evolution remained elusive. On the basis of global geologic mapping, we identify prevalent ≥1 km wide linear features that formed: 1) as the surface expression of subsurface fractures, and 2) as material ejected during impact-crater formation impacted and scoured the surface, forming secondary crater chains. The formation and preservation of these linear features indicates Ceres' outer layer is relatively strong, and is not dominated by viscous relaxation as predicted. The fractures also give us insights into Ceres' interior: their spacing indicates the fractured layer is ~30 km thick, and we interpret that the fractures formed because of uplift and extension induced by an upwelling region, which is consistent with geodynamic modeling (King et al., 2016). In addition, we find that some secondary crater chains do not form radial patterns around their source impact craters, and are located in a different hemisphere from their source impact craters, because of Ceres' fast rotation (period of ~9 hours) and relatively small radius. Our results show Ceres has a surface and outer layer with characteristics that are different than predicted, and underwent complex surface and interior evolution. Our fuller understanding of Ceres, based on Dawn data, gives us important insights into the evolution of bodies in the asteroid belt, and provides unique constraints that can be used to evaluate predictions of the surface 18. Size Effects on Surface Elastic Waves in a Semi-Infinite Medium with Atomic Defect Generation Directory of Open Access Journals (Sweden) 2013-01-01 Full Text Available The paper investigates small-scale effects on the Rayleigh-type surface wave propagation in an isotropic elastic half-space upon laser irradiation.
Based on Eringen’s theory of nonlocal continuum mechanics, the basic equations of wave motion and laser-induced atomic defect dynamics are derived. The dispersion equation that governs the Rayleigh surface waves in the considered medium is derived and analyzed. Explicit expressions for the phase velocity and attenuation (amplification) coefficients which characterize the surface waves are obtained. It is shown that if the generation rate is above a critical value, nanometer-sized ordered concentration-strain structures arise on the surface or in the volume of solids due to concentration-elastic instability. The spatial scale of these structures is proportional to the characteristic length of the defect-atom interaction and increases with the temperature of the medium. The critical value of the pump parameter is directly proportional to the recombination rate and inversely proportional to the deformational potentials of the defects. 19. Method for controlling a coolant liquid surface of cooling system instruments in an atomic power plant International Nuclear Information System (INIS) Monta, Kazuo. 1974-01-01 Object: To prevent the coolant inventory within a cooling system loop in an atomic power plant from varying with load, thereby relaxing the restriction that lowering of a liquid surface due to a shortage of coolant places on the rate of change of coolant flow. Structure: Instruments such as a superheater, an evaporator, and the like, which constitute a cooling system loop in an atomic power plant, have a plurality of free liquid surfaces of coolant.
Portions whose liquid surface is controlled and portions whose liquid surface is varied are adjusted in cross-sectional area so that the total variation in coolant inventory in an instrument such as a superheater, provided with an annulus portion in the center thereof and an inner cylindrical portion and a down-comer in the side thereof, becomes equal to the variation in coolant inventory in an instrument such as an evaporator similar to the superheater, which is provided with an overflow pipe in its inner cylindrical portion or down-comer, thereby minimizing the load-induced variation in total coolant inventory and thus the variation in the rate of change of coolant flow. (Kamimura, M.) 20. Surface modification of highly oriented pyrolytic graphite by reaction with atomic nitrogen at high temperatures International Nuclear Information System (INIS) Zhang Luning; Pejakovic, Dusan A.; Geng Baisong; Marschall, Jochen 2011-01-01 Dry etching of {0 0 0 1} basal planes of highly oriented pyrolytic graphite (HOPG) using active nitridation by nitrogen atoms was investigated at low pressures and high temperatures. The etching process produces channels at grain boundaries and pits whose shapes depend on the reaction temperature. For temperatures below 600 deg. C, the majority of pits are nearly circular, with a small fraction of hexagonal pits with rounded edges. For temperatures above 600 deg. C, the pits are almost exclusively hexagonal with straight edges. The Raman spectra of samples etched at 1000 deg. C show the D mode near 1360 cm⁻¹, which is absent in pristine HOPG. For deep hexagonal pits that penetrate many graphene layers, neither the surface number density of pits nor the width of the pit size distribution changes substantially with the nitridation time, suggesting that these pits are initiated at a fixed number of extended defects intersecting {0 0 0 1} planes.
Shallow pits that penetrate 1-2 graphene layers have a wide size distribution, which suggests that these pits are initiated on pristine graphene surfaces from lattice vacancies continually formed by N atoms. A similar wide size distribution of shallow hexagonal pits is observed in an n-layer graphene sample after N-atom etching. 1. On the possibility of study the surface structure of small bio-objects, including fragments of nucleotide chains, by means of electron interference Energy Technology Data Exchange (ETDEWEB) Namiot, V.A., E-mail: [email protected] [Institute of Nuclear Physics, Moscow State University, Vorobyovy Gory, 119992 Moscow (Russian Federation) 2009-07-20 We propose a new method to study the surface of small bio-objects, including macromolecules and their complexes. This method is based on the interference of low-energy electrons. Theoretically, this type of interference may allow one to construct a hologram of the biological object, but, unlike an optical hologram, with a spatial resolution of the order of inter-atomic distances. The method makes it possible to construct a series of such holograms at various electron energies. In theory, such information would be enough to identify the types of molecular groups existing on the surface of the studied object. This method could also be used for 'fast reading' of nucleotide chains. It has been shown how to deposit a long linear molecule as a straight line on a substrate before carrying out such 'reading'. 2. Low temperature removal of surface oxides and hydrocarbons from Ge(100) using atomic hydrogen Energy Technology Data Exchange (ETDEWEB) Walker, M., E-mail: [email protected]; Tedder, M.S.; Palmer, J.D.; Mudd, J.J.; McConville, C.F. 2016-08-30 Highlights: • Preparation of a clean, well-ordered Ge(100) surface with atomic hydrogen. • Surface oxide layers removed by AHC at room temperature, but not hydrocarbons.
• Increasing surface temperature during AHC dramatically improves efficiency. • AHC with the surface heated to 250 °C led to a near complete removal of contaminants. • (2 × 1) LEED pattern from IBA and AHC indicates asymmetric dimer reconstruction. - Abstract: Germanium is a group IV semiconductor with many current and potential applications in the modern semiconductor industry. Key to expanding the use of Ge is a reliable method for the removal of surface contamination, including oxides which are naturally formed during the exposure of Ge thin films to atmospheric conditions. A process for achieving this task at lower temperatures would be highly advantageous, where the underlying device architecture will not diffuse through the Ge film while also avoiding electronic damage induced by ion irradiation. Atomic hydrogen cleaning (AHC) offers a low-temperature, damage-free alternative to the common ion bombardment and annealing (IBA) technique which is widely employed. In this work, we demonstrate with X-ray photoelectron spectroscopy (XPS) that the AHC method is effective in removing surface oxides and hydrocarbons, yielding an almost completely clean surface when the AHC is conducted at a temperature of 250 °C. We compare the post-AHC cleanliness and (2 × 1) low energy electron diffraction (LEED) pattern to that obtained via IBA, where the sample is annealed at 600 °C. We also demonstrate that the combination of a sample temperature of 250 °C and atomic H dosing is required to clean the surface. Lower temperatures prove less effective in removal of the oxide layer and hydrocarbons, whilst annealing in ultra-high vacuum conditions only removes weakly bound hydrocarbons. Finally, we examine the subsequent H-termination of an IBA-cleaned sample using XPS, LEED and ultraviolet 3. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. 
Four-atom systems Energy Technology Data Exchange (ETDEWEB) Li, Jun; Jiang, Bin; Guo, Hua, E-mail: [email protected] [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States) 2013-11-28 A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations. 4. On the theory of diffraction of Maxwellian atomic beams by solid surfaces International Nuclear Information System (INIS) Goodman, F.O. 1976-01-01 In the context of diffraction of Maxwellian (thermal) atomic beams by solid surfaces, the usual assumption that the angular position of the maximum in a diffracted beam corresponds to the diffraction angle of atoms with the most probable de Broglie wavelength is examined, and compared with other possible criteria and with the correct result. It is concluded that, although this criterion may be the best simple one available, it is certainly bad in some situations; the reasons why, and the conditions under which, it is expected to be good are discussed. 
Also, it is shown that considerable care must be taken when shapes of diffracted beams and when angular positions of their maxima are calculated, because certain physical effects (which are always present) may change these shapes and positions in unexpected ways. The theory is compared with two sets of relatively modern experimental data, one set for which the fit is good, and another set for which a fit is impossible. 5. Self-cleaning and surface chemical reactions during hafnium dioxide atomic layer deposition on indium arsenide. Science.gov (United States) Timm, Rainer; Head, Ashley R; Yngman, Sofie; Knutsson, Johan V; Hjort, Martin; McKibbin, Sarah R; Troian, Andrea; Persson, Olof; Urpelainen, Samuli; Knudsen, Jan; Schnadt, Joachim; Mikkelsen, Anders 2018-04-12 Atomic layer deposition (ALD) enables the ultrathin high-quality oxide layers that are central to all modern metal-oxide-semiconductor circuits. Crucial to achieving superior device performance are the chemical reactions during the first deposition cycle, which could ultimately result in atomic-scale perfection of the semiconductor-oxide interface. Here, we directly observe the chemical reactions at the surface during the first cycle of hafnium dioxide deposition on indium arsenide under realistic synthesis conditions using photoelectron spectroscopy. We find that the widely used ligand exchange model of the ALD process for the removal of native oxide on the semiconductor and the simultaneous formation of the first hafnium dioxide layer must be significantly revised. Our study provides substantial evidence that the efficiency of the self-cleaning process and the quality of the resulting semiconductor-oxide interface can be controlled by the molecular adsorption process of the ALD precursors, rather than the subsequent oxide formation. 6. Quasi-elastic helium-atom scattering from surfaces: experiment and interpretation International Nuclear Information System (INIS) Jardine, A.P.; Ellis, J.; Allison, W.
2002-01-01 Diffusion of an adsorbate is affected both by the adiabatic potential energy surface in which the adsorbate moves and by the rate of thermal coupling between the adsorbate and substrate. In principle both factors are amenable to investigation through quasi-elastic broadening in the energy spread of a probing beam of helium atoms. This review provides a topical summary of both the quasi-elastic helium-atom scattering technique and the available data in relation to the determination of diffusion parameters. In particular, we discuss the activation barriers deduced from experiment and their relation to the adiabatic potential and the central role played by the friction parameter, using the CO/Cu(001) system as a case study. The main issues to emerge are the need for detailed molecular dynamics simulations in the interpretation of data and the desirability of significantly greater energy resolution in the experiments themselves. (author) 7. Ultrafast terahertz control of extreme tunnel currents through single atoms on a silicon surface DEFF Research Database (Denmark) Jelic, Vedran; Iwaszczuk, Krzysztof; Nguyen, Peter H. 2017-01-01 Ultrafast control of current on the atomic scale is essential for future innovations in nanoelectronics. Extremely localized transient electric fields on the nanoscale can be achieved by coupling picosecond duration terahertz pulses to metallic nanostructures. Here, we demonstrate terahertz scanning tunnelling microscopy (THz-STM) in ultrahigh vacuum as a new platform for exploring ultrafast non-equilibrium tunnelling dynamics with atomic precision. Extreme terahertz-pulse-driven tunnel currents up to 10^7 times larger than steady-state currents in conventional STM are used to image […]; terahertz-induced band bending and non-equilibrium charging of surface states opens new conduction pathways to the bulk, enabling extreme transient tunnel currents to flow between the tip and sample. 8.
Influence of the atomic force microscope tip on the multifractal analysis of rough surfaces International Nuclear Information System (INIS) Klapetek, Petr; Ohlidal, Ivan; Bilek, Jindrich 2004-01-01 In this paper, the influence of the atomic force microscope tip on the multifractal analysis of rough surfaces is discussed. This analysis is based on two methods, i.e. on the correlation function method and the wavelet transform modulus maxima method. The principles of both methods are briefly described. Both methods are applied to simulated rough surfaces (simulation is performed by the spectral synthesis method). It is shown that the finite dimensions of the microscope tip misrepresent the values of the quantities expressing the multifractal analysis of rough surfaces within both methods. Thus, it was concretely shown that the influence of the finite dimensions of the microscope tip changed the mono-fractal properties of a simulated rough surface to multifractal ones. Further, it is shown that a surface reconstruction method developed for removing the negative influence of the microscope tip does not improve the results obtained in a substantial way. The theoretical procedures concerning both methods, i.e. the correlation function method and the wavelet transform modulus maxima method, are illustrated for the multifractal analysis of randomly rough gallium arsenide surfaces prepared by means of the thermal oxidation of smooth gallium arsenide surfaces and subsequent dissolution of the oxide films. 9. Nonlinear dynamic response of cantilever beam tip during atomic force microscopy (AFM) nanolithography of copper surface International Nuclear Information System (INIS) Yeh, Y-L; Jang, M-J; Wang, C-C; Lin, Y-P; Chen, K-S 2008-01-01 This paper investigates the nonlinear dynamic response of an atomic force microscope (AFM) cantilever beam tip during the nanolithography of a copper (Cu) surface using a high-depth feed.
The dynamic motion of the tip is modeled using a combined approach based on Newton's law and empirical observations. The cutting force is determined from experimental observations of the piling height on the Cu surface and the rotation angle of the cantilever beam tip. It is found that the piling height increases linearly with the cantilever beam carrier velocity. Furthermore, the cantilever beam tip is found to execute a sawtooth motion. Both this motion and the shear cutting force are nonlinear. The elastic modulus in the y direction is variable. Finally, the velocity of the cantilever beam tip as it traverses the specimen surface has a discrete characteristic rather than a smooth, continuous profile. 10. Direct observation of deformation of nafion surfaces induced by methanol treatment by using atomic force microscopy International Nuclear Information System (INIS) Umemura, Kazuo; Kuroda, Reiko; Gao Yanfeng; Nagai, Masayuki; Maeda, Yuta 2008-01-01 We successfully characterized the effect of methanol treatment on the nanoscopic structures of a nafion film, which is widely used in direct methanol fuel cells (DMFCs). Atomic force microscopy (AFM) was used to repetitively image a particular region of a nafion sample before and after methanol solutions were dropped onto the nafion film and dried in air. When the surface was treated with 20% methanol for 5 min, many nanopores appeared on the surface. The number of nanopores increased when the sample was treated twice or thrice. By repetitive AFM imaging of a particular region of the same sample, we found that the shapes of the nanopores were deformed by the repeated methanol treatment, although the size of the nanopores had not significantly changed. The creation of the nanopores was affected by the concentration of methanol. Our results directly visualized the effects of methanol treatment on the surface structures of a nafion film at nanoscale levels for the first time. 11.
Reconstruction of the Tip-Surface Interaction Potential by Analysis of the Brownian Motion of an Atomic Force Microscope Tip NARCIS (Netherlands) Willemsen, O.H.; Kuipers, L.; van der Werf, Kees; de Grooth, B.G.; Greve, Jan 2000-01-01 The thermal movement of an atomic force microscope (AFM) tip is used to reconstruct the tip-surface interaction potential. If a tip is brought into the vicinity of a surface, its movement is governed by the sum of the harmonic cantilever potential and the tip-surface interaction potential. By 12. A Monte Carlo simulation of the exchange reaction between gaseous molecules and the atoms on a heterogeneous solid surface International Nuclear Information System (INIS) Imai, Hisao 1980-01-01 A method of the Monte Carlo simulation of the isotopic exchange reaction between gaseous molecules and the atoms on an arbitrarily heterogeneous solid surface is described by employing hydrogen as an example. (author) 13. Surface modification of nanodiamond through metal free atom transfer radical polymerization Energy Technology Data Exchange (ETDEWEB) Zeng, Guangjian; Liu, Meiying; Shi, Kexin; Heng, Chunning; Mao, Liucheng; Wan, Qing; Huang, Hongye [Department of Chemistry, Nanchang University, 999 Xuefu Avenue, Nanchang 330031 (China); Deng, Fengjie, E-mail: [email protected] [Department of Chemistry, Nanchang University, 999 Xuefu Avenue, Nanchang 330031 (China); Zhang, Xiaoyong, E-mail: [email protected] [Department of Chemistry, Nanchang University, 999 Xuefu Avenue, Nanchang 330031 (China); Wei, Yen, E-mail: [email protected] [Department of Chemistry and the Tsinghua Center for Frontier Polymer Research, Tsinghua University, Beijing, 100084 (China) 2016-12-30 Highlights: • Surface modification of ND with water soluble and biocompatible polymers. • Functionalized ND through metal free surface initiated ATRP. • The metal free surface initiated ATRP is rather simple and effective. 
• The ND-poly(MPC) showed high dispersibility and desirable biocompatibility. - Abstract: Surface modification of nanodiamond (ND) with poly(2-methacryloyloxyethyl phosphorylcholine) [poly(MPC)] has been achieved by using metal free surface initiated atom transfer radical polymerization (SI-ATRP). The ATRP initiator was first immobilized on the surface of ND through a direct esterification reaction between the hydroxyl groups of ND and 2-bromoisobutyryl bromide. The initiator could be employed to obtain ND-poly(MPC) nanocomposites through SI-ATRP using an organic catalyst. The final functional materials were characterized by ¹H nuclear magnetic resonance, transmission electron microscopy, X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy and thermogravimetric analysis in detail. All of these characterization results demonstrated that ND-poly(MPC) have been successfully obtained via metal free photo-initiated SI-ATRP. The ND-poly(MPC) nanocomposites showed enhanced dispersibility in various solvents as well as excellent biocompatibility. As compared with traditional ATRP, the metal free ATRP is rather simple and effective. More importantly, this preparation method avoided the negative influence of metal catalysts. Therefore, the method described in this work should be a promising strategy for fabrication of polymeric nanocomposites with great potential for different applications, especially in biomedical fields. 14. Following the surface response of caffeine cocrystals to controlled humidity storage by atomic force microscopy. Science.gov (United States) Cassidy, A M C; Gardner, C E; Jones, W 2009-09-08 Active pharmaceutical ingredient (API) stability in solid state tablet formulation is frequently a function of the relative humidity (RH) environment in which the drug is stored. Caffeine is one such problematic API. Previously reported caffeine cocrystals, however, were found to offer increased resistance to caffeine hydrate formation.
Here we report on the use of atomic force microscopy (AFM) to image the surface of two caffeine cocrystal systems to look for differences between the surface and bulk response of the cocrystal to storage in controlled humidity environments. Bulk responses have previously been assessed by powder X-ray diffraction. With AFM, pinning sites were identified at step edges on caffeine/oxalic acid, with these sites leading to non-uniform step movement on going from ambient to 0% RH. At RH >75%, areas of fresh crystal growth were seen on the cocrystal surface. In the case of caffeine/malonic acid the cocrystals were observed to absorb water anisotropically after storage at 75% RH for 2 days, affecting the surface topography of the cocrystal. These results show that AFM expands on the data gathered by bulk analytical techniques, such as powder X-ray diffraction, by providing localised surface information. This surface information may be important for better predicting API stability in isolation and at a solid state API-excipient interface. 15. Surface modification of nanodiamond through metal free atom transfer radical polymerization International Nuclear Information System (INIS) Zeng, Guangjian; Liu, Meiying; Shi, Kexin; Heng, Chunning; Mao, Liucheng; Wan, Qing; Huang, Hongye; Deng, Fengjie; Zhang, Xiaoyong; Wei, Yen 2016-01-01 Highlights: • Surface modification of ND with water soluble and biocompatible polymers. • Functionalized ND through metal free surface initiated ATRP. • The metal free surface initiated ATRP is rather simple and effective. • The ND-poly(MPC) showed high dispersibility and desirable biocompatibility. - Abstract: Surface modification of nanodiamond (ND) with poly(2-methacryloyloxyethyl phosphorylcholine) [poly(MPC)] has been achieved by using metal free surface initiated atom transfer radical polymerization (SI-ATRP). 
The ATRP initiator was first immobilized on the surface of ND through a direct esterification reaction between the hydroxyl groups of ND and 2-bromoisobutyryl bromide. The initiator could be employed to obtain ND-poly(MPC) nanocomposites through SI-ATRP using an organic catalyst. The final functional materials were characterized by ¹H nuclear magnetic resonance, transmission electron microscopy, X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy and thermogravimetric analysis in detail. All of these characterization results demonstrated that ND-poly(MPC) have been successfully obtained via metal free photo-initiated SI-ATRP. The ND-poly(MPC) nanocomposites showed enhanced dispersibility in various solvents as well as excellent biocompatibility. As compared with traditional ATRP, the metal free ATRP is rather simple and effective. More importantly, this preparation method avoided the negative influence of metal catalysts. Therefore, the method described in this work should be a promising strategy for fabrication of polymeric nanocomposites with great potential for different applications, especially in biomedical fields. 16. Synthesis of ZnS nanoparticles on a solid surface: Atomic force microscopy study International Nuclear Information System (INIS) Yuan Huizhen; Lian Wenping; Song Yonghai; Chen Shouhui; Chen Lili; Wang Li 2010-01-01 In this work, zinc sulfide (ZnS) nanoparticles were synthesized on DNA network/mica and bare mica surfaces, respectively. The synthesis was carried out by first dropping a mixture of zinc acetate and DNA on a mica surface for the formation of the DNA networks or zinc acetate solution on a mica surface, and subsequently transferring the sample into a heated thiourea solution. The Zn²⁺ adsorbed on the DNA network/mica or mica surface would react with S²⁻ produced from thiourea and form ZnS nanoparticles on these surfaces.
X-ray diffraction and atomic force microscopy (AFM) were used to characterize the ZnS nanoparticles in detail. AFM results showed that ZnS nanoparticles were distributed uniformly on the mica surface and deposited preferentially on DNA networks. It was also found that the size and density of ZnS nanoparticles could be effectively controlled by adjusting the reaction temperature and the concentration of Zn²⁺ or DNA. The possible growth mechanisms have been discussed in detail. 17. In situ measurement of fixed charge evolution at silicon surfaces during atomic layer deposition International Nuclear Information System (INIS) Ju, Ling; Watt, Morgan R.; Strandwitz, Nicholas C. 2015-01-01 Interfacial fixed charge or interfacial dipoles are present at many semiconductor-dielectric interfaces and have important effects upon device behavior, yet the chemical origins of these electrostatic phenomena are not fully understood. We report the measurement of changes in Si channel conduction in situ during atomic layer deposition (ALD) of aluminum oxide using trimethylaluminum and water to probe changes in surface electrostatics. Current-voltage data were acquired continually before, during, and after the self-limiting chemical reactions that result in film growth. Our measurements indicated an increase in conductance on p-type samples with p⁺ ohmic contacts and a decrease in conductance on analogous n-type samples. Further, p⁺-contacted samples with n-type channels exhibited an increase in measured current and n⁺-contacted p-type samples exhibited a decrease in current under applied voltage. Device physics simulations, where a fixed surface charge was parameterized on the channel surface, connect the surface charge to changes in current-voltage behavior. The simulations and analogous analytical relationships for near-surface conductance were used to explain the experimental results.
Specifically, the changes in current-voltage behavior can be attributed to the formation of a fixed negative charge or the modification of a surface dipole upon chemisorption of trimethylaluminum. These measurements allow for the observation of fixed charge or dipole formation during ALD and provide further insight into the electrostatic behavior at semiconductor-dielectric interfaces during film nucleation. 18. Simple preparation of thiol-ene particles in glycerol and surface functionalization by thiol-ene chemistry (TEC) and surface chain transfer free radical polymerization (SCT-FRP) DEFF Research Database (Denmark) Hoffmann, Christian; Chiaula, Valeria; Yu, Liyun 2018-01-01 functionalization of excess thiol groups via photochemical thiol-ene chemistry (TEC) resulting in a functional monolayer. In addition, surface chain transfer free radical polymerization (SCT-FRP) was used for the first time to introduce a thicker polymer layer on the particle surface. The application potential... 19. The conditions for total reflection of low-energy atoms from crystal surfaces International Nuclear Information System (INIS) Hou, M.; Robinson, M.T. 1978-01-01 The critical angles for the total reflection of low-energy particles from Cu rows and (001) planes have been investigated, using the binary collision approximation computer simulation code MARLOWE. Breakthrough angles were evaluated for H, N, Ne, Ar, Cu, Xe, and Au in the energy range from 0.1 to 7.5 keV. In both the axial and the planar cases, recoiling of the target atoms lowers the energy barrier which the target surface presents to the heavy projectiles. Consequently, the breakthrough angles are reduced for heavy projectiles below the values expected either from observations on light projectiles or from analytical channeling theory. (orig.) 20. Correlating yeast cell stress physiology to changes in the cell surface morphology: atomic force microscopic studies.
Science.gov (United States) Canetta, Elisabetta; Walker, Graeme M; Adya, Ashok K 2006-07-06 Atomic Force Microscopy (AFM) has emerged as a powerful biophysical tool in biotechnology and medicine to investigate the morphological, physical, and mechanical properties of yeasts and other biological systems. However, properties such as yeasts' response to environmental stresses, metabolic activities of pathogenic yeasts, cell-cell/cell-substrate adhesion, and cell-flocculation have rarely been investigated so far by using biophysical tools. Our recent results obtained by AFM on one strain each of Saccharomyces cerevisiae and Schizosaccharomyces pombe show a clear correlation between the physiology of environmentally stressed yeasts and the changes in their surface morphology. The future directions of the AFM related techniques in relation to yeasts are also discussed. 1. Atomic force microscopy-based repeated machining theory for nanochannels on silicon oxide surfaces Energy Technology Data Exchange (ETDEWEB) Wang, Z.Q., E-mail: [email protected] [State Key Laboratory of Robotics, Shenyang Institute of Automation, CAS, Shenyang 110016 (China); Graduate University of the Chinese Academy of Sciences, Beijing 100049 (China); Jiao, N.D. [State Key Laboratory of Robotics, Shenyang Institute of Automation, CAS, Shenyang 110016 (China); Tung, S. [Department of Mechanical Engineering, University of Arkansas, Fayetteville, AR 72701 (United States); Dong, Z.L. [State Key Laboratory of Robotics, Shenyang Institute of Automation, CAS, Shenyang 110016 (China) 2011-02-01 The atomic force microscopy (AFM)-based repeated nanomachining of nanochannels on silicon oxide surfaces is investigated both theoretically and experimentally. The relationship between the initial and final nanochannel depths at a given normal force is systematically studied. Using the derived theory and simulation results, the final nanochannel depth can be predicted easily.
Meanwhile, if a nanochannel with an expected depth needs to be machined, an appropriate normal force can be selected simply and easily in order to decrease the wear of the AFM tip. The theoretical analysis and simulation results can be effectively used for AFM-based fabrication of nanochannels. 2. Osmium Atoms and Os2 Molecules Move Faster on Selenium-Doped Compared to Sulfur-Doped Boronic Graphenic Surfaces. Science.gov (United States) Barry, Nicolas P E; Pitto-Barry, Anaïs; Tran, Johanna; Spencer, Simon E F; Johansen, Adam M; Sanchez, Ana M; Dove, Andrew P; O'Reilly, Rachel K; Deeth, Robert J; Beanland, Richard; Sadler, Peter J 2015-07-28 We deposited Os atoms on S- and Se-doped boronic graphenic surfaces by electron bombardment of micelles containing 16e complexes [Os(p-cymene)(1,2-dicarba-closo-dodecarborane-1,2-diselenate/dithiolate)] encapsulated in a triblock copolymer. The surfaces were characterized by energy-dispersive X-ray (EDX) analysis and electron energy loss spectroscopy of energy filtered TEM (EFTEM). Os atoms moved ca. 26× faster on the B/Se surface compared to the B/S surface (233 ± 34 pm·s⁻¹ versus 8.9 ± 1.9 pm·s⁻¹). Os atoms formed dimers with an average Os-Os distance of 0.284 ± 0.077 nm on the B/Se surface and 0.243 ± 0.059 nm on B/S, close to that in metallic Os. The Os2 molecules moved 0.83× and 0.65× more slowly than single Os atoms on B/S and B/Se surfaces, respectively, and again markedly faster (ca. 20×) on the B/Se surface (151 ± 45 pm·s⁻¹ versus 7.4 ± 2.8 pm·s⁻¹). Os atom motion did not follow Brownian motion and appears to involve anchoring sites, probably S and Se atoms. The ability to control the atomic motion of metal atoms and molecules on surfaces has potential for exploitation in nanodevices of the future. 3. The role of side chain conformational flexibility in surface recognition by Tenebrio molitor antifreeze protein Science.gov (United States) Daley, Margaret E.; Sykes, Brian D.
2003-01-01 Two-dimensional nuclear magnetic resonance spectroscopy was used to investigate the flexibility of the threonine side chains in the β-helical Tenebrio molitor antifreeze protein (TmAFP) at low temperatures. From measurement of the ³Jαβ ¹H-¹H scalar coupling constants, the χ₁ angles and preferred rotamer populations can be calculated. It was determined that the threonines on the ice-binding face of the protein adopt a preferred rotameric conformation at near freezing temperatures, whereas the threonines not on the ice-binding face sample many rotameric states. This suggests that TmAFP maintains a preformed ice-binding conformation in solution, wherein the rigid array of threonines that form the AFP-ice interface matches the ice crystal lattice. A key factor in binding to the ice surface and inhibition of ice crystal growth appears to be the close surface-to-surface complementarity between the AFP and crystalline ice, and the lack of an entropic penalty associated with freezing out motions in a flexible ligand. PMID:12824479 4. Electron-induced desorption of europium atoms from oxidized tungsten surface: concentration dependence of low-energy peak CERN Document Server Davydov, S Y 2002-01-01 The nature of electron-induced desorption of neutral europium atoms (Eu⁰) at low irradiating-electron energies Ee (≈30 eV), and the peculiar dependence of the Eu⁰ yield on their concentration at an oxidized tungsten surface, are discussed. The primary act of vacancy creation in the inner 5p shell of the europium adatom turns out to be the determining stage. Estimates show that only the first of two possible ionization scenarios (intra-atomic electron transfer to an external quasi-level of the Eu adatom, or release of the knocked-out electron into vacuum) leads to Eu⁰ desorption. A concentration threshold for the yield of Eu⁰ atoms is determined. 5.
Surface relief gratings: experiments, physical scenarios, and photoinduced (anomalous) dynamics of functionalized polymer chains Science.gov (United States) Mitus, A. C.; Radosz, W.; Wysoczanski, T.; Pawlik, G. 2017-10-01 Surface Relief Gratings (SRG) were demonstrated experimentally more than 20 years ago. Despite many years of research effort, the underlying physical mechanisms remain unclear. In this paper we present a short overview of the main concepts related to SRG - photofluidization and its counterpart, the orientational approach - based on a seminal paper by Saphiannikova et al. Next, we summarize the derivation of the cos²θ potential, following the lines of a recent paper by this group. Those results validate the generic Monte Carlo model for the photoinduced build-up of the density and SRG gratings in a model polymer matrix functionalized with azo-dyes, presented in another part of the paper. The characterization of the photoinduced motion of polymer chains, based on our recent paper, is briefly discussed in the last part of the paper. This discussion offers a sound insight into the mechanisms responsible for inscription of SRG as well as for single functionalized nanoparticle studies. 6. The extraction of liquid, protein molecules and yeast cells from paper through surface acoustic wave atomization. Science.gov (United States) Qi, Aisha; Yeo, Leslie; Friend, James; Ho, Jenny 2010-02-21 Paper has been proposed as an inexpensive and versatile carrier for microfluidics devices with abilities well beyond simple capillary action for pregnancy tests and the like. Unlike standard microfluidics devices, extracting a fluid from the paper is a challenge and a drawback to its broader use. Here, we extract fluid from narrow paper strips using surface acoustic wave (SAW) irradiation that subsequently atomizes the extracted fluid into a monodisperse aerosol for use in mass spectroscopy, medical diagnostics, and drug delivery applications.
Two protein molecules, ovalbumin and bovine serum albumin (BSA), have been preserved in paper and then extracted using atomized mist through SAW excitation; protein electrophoresis shows there is less than 1% degradation of either protein molecule in this process. Finally, a solution of live yeast cells was infused into paper, which was subsequently dried for preservation then remoistened to extract the cells via SAW atomization, yielding live cells at the completion of the process. The successful preservation and extraction of fluids, proteins and yeast cells significantly expands the usefulness of paper in microfluidics. 7. A first principle study for the adsorption and absorption of carbon atom and the CO dissociation on Ir(100) surface Energy Technology Data Exchange (ETDEWEB) Erikat, I. A., E-mail: [email protected] [Department of Physics, Jerash University, Jerash-26150 (Jordan); Hamad, B. A. [Department of Physics, The University of Jordan, Amman-11942 (Jordan) 2013-11-07 We employ density functional theory to examine the adsorption and absorption of a carbon atom as well as the dissociation of carbon monoxide on the Ir(100) surface. We find that carbon atoms bind strongly to the Ir(100) surface and prefer the high-coordination hollow site for all coverages. In the case of 0.75 ML coverage of carbon, we obtain a bridging metal structure due to the balance between Ir–C and Ir–Ir interactions. In the subsurface region, the carbon atom prefers the octahedral site of the Ir(100) surface. We find a large diffusion barrier (2.70 eV) for a carbon atom moving into the Ir(100) surface, due to the strong bonding between the carbon atom and the surface, whereas we find a very small segregation barrier (0.22 eV) from the subsurface to the surface. The minimum energy path and energy barrier for the dissociation of CO on the Ir(100) surface are obtained by using the climbing-image nudged elastic band method.
The energy barrier of CO dissociation on the Ir(100) surface is found to be 3.01 eV, which is appreciably larger than the association energy (1.61 eV) of this molecule. 10. Pt Single Atoms Embedded in the Surface of Ni Nanocrystals as Highly Active Catalysts for Selective Hydrogenation of Nitro Compounds. Science.gov (United States) Peng, Yuhan; Geng, Zhigang; Zhao, Songtao; Wang, Liangbing; Li, Hongliang; Wang, Xu; Zheng, Xusheng; Zhu, Junfa; Li, Zhenyu; Si, Rui; Zeng, Jie 2018-06-13 Single-atom catalysts exhibit high selectivity in hydrogenation due to their isolated active sites, which ensure uniform adsorption configurations of substrate molecules. Compared with the achievement in catalytic selectivity, there is still a long way to go in exploiting the catalytic activity of single-atom catalysts. Herein, we developed highly active and selective catalysts for selective hydrogenation by embedding Pt single atoms in the surface of Ni nanocrystals (denoted as Pt1/Ni nanocrystals). During the hydrogenation of 3-nitrostyrene, the turnover frequencies (TOFs) based on surface Pt atoms of Pt1/Ni nanocrystals reached ∼1800 h(-1) under 3 atm of H2 at 40 °C, much higher than those of Pt single atoms supported on active carbon, TiO2, SiO2, and ZSM-5.
Mechanistic studies reveal that the remarkable activity of Pt1/Ni nanocrystals derives from a sufficient hydrogen supply, owing to the spontaneous dissociation of H2 on both Pt and Ni atoms as well as the facile diffusion of H atoms on Pt1/Ni nanocrystals. Moreover, the ensemble composed of the Pt single atom and nearby Ni atoms in Pt1/Ni nanocrystals leads to an adsorption configuration of 3-nitrostyrene favorable for the activation of nitro groups, accounting for the high selectivity for 3-vinylaniline. 11. Analysis and Calibration of in situ scanning tunnelling microscopy Images with atomic Resolution Influenced by Surface Drift Phenomena DEFF Research Database (Denmark) Andersen, Jens Enevold Thaulov; Møller, Per 1994-01-01 The influence of surface drift velocities on in situ scanning tunnelling microscopy (STM) experiments with atomic resolution is analysed experimentally and mathematically. Constant drift velocities much smaller than the speed of scanning can in many in situ STM experiments with atomic resolution ...... as well as the vectors of the non-distorted surface lattice can be determined. The calibration of distances can thus be carried out also when the image is influenced by drift. Results with gold surfaces and graphite surfaces are analysed and discussed.... 12. Attractive interaction between Mn atoms on the GaAs(110) surface observed by scanning tunneling microscopy. Science.gov (United States) Taninaka, Atsushi; Yoshida, Shoji; Kanazawa, Ken; Hayaki, Eiko; Takeuchi, Osamu; Shigekawa, Hidemi 2016-06-16 Scanning tunneling microscopy/spectroscopy (STM/STS) was carried out to investigate the structures of Mn atoms deposited on a GaAs(110) surface at room temperature to directly observe the characteristics of interactions between Mn atoms in GaAs. Mn atoms were paired with a probability higher than expected for a random distribution, indicating an attractive interaction between them. In fact, re-pairing of unpaired Mn atoms was observed during STS measurement.
The pair initially had a new structure, which was transformed during STS measurement into one of those formed by atom manipulation at 4 K. Mn atoms in pairs and trimers were aligned in the direction, which is theoretically predicted to produce a high Curie temperature. 13. Mechanisms for the reflection of light atoms from crystal surfaces at kilovolt energies International Nuclear Information System (INIS) Hou, M.; Robinson, M.T. 1978-01-01 The computer program MARLOWE was used to investigate the backscattering of protons from the (110) surface of a nickel crystal. Grazing incidence was considered so that anisotropic effects originated mainly from the surface region. The contribution of aligned scattering was studied by comparing the results with similar calculations for an amorphous target. Energy distributions of backscattered particles were investigated for incident energies ranging from 0.1 to 5 keV. The structure of these distributions was explained by making calculations for several target thicknesses. Specular reflection was found to depend on the structure of the first few atomic planes only. The (110) rows in the surface plane were responsible for focusing into surface semichannels. Focusing in these semichannels was found to be the strongest under total reflection conditions (below about 1.3 keV), while the scattering intensity from surface rows increased with increasing incident energy. The orientation of the plane of incidence was found to have a large influence on the relative contributions of the reflection mechanisms involved. (orig.) 14. Effect of atomic layer deposition coatings on the surface structure of anodic aluminum oxide membranes.
Science.gov (United States) Xiong, Guang; Elam, Jeffrey W; Feng, Hao; Han, Catherine Y; Wang, Hsien-Hau; Iton, Lennox E; Curtiss, Larry A; Pellin, Michael J; Kung, Mayfair; Kung, Harold; Stair, Peter C 2005-07-28 Anodic aluminum oxide (AAO) membranes were characterized by UV Raman and FT-IR spectroscopies before and after coating the entire surface (including the interior pore walls) of the AAO membranes by atomic layer deposition (ALD). UV Raman reveals the presence of aluminum oxalate in bulk AAO, both before and after ALD coating with Al2O3, because of acid anion incorporation during the anodization process used to produce AAO membranes. The aluminum oxalate in AAO exhibits remarkable thermal stability, not totally decomposing in air until exposed to a temperature >900 degrees C. ALD was used to cover the surface of AAO with either Al2O3 or TiO2. Uncoated AAO membranes show FT-IR spectra with two separate types of OH stretches that can be assigned to isolated OH groups and hydrogen-bonded surface OH groups, respectively. In contrast, AAO surfaces coated by ALD with Al2O3 display a single, broad band of hydrogen-bonded OH groups. AAO substrates coated with TiO2 show a more complicated behavior. UV Raman results show that very thin TiO2 coatings (1 nm) are not stable upon annealing to 500 degrees C. In contrast, thicker coatings can totally cover the contaminated alumina surface and are stable at temperatures in excess of 500 degrees C. 15. Semiclassical multi-phonon theory for atom-surface scattering: Application to the Cu(111) system. Science.gov (United States) Daon, Shauli; Pollak, Eli 2015-05-07 The semiclassical perturbation theory of Hubbard and Miller [J. Chem. Phys. 80, 5827 (1984)] is further developed to include the full multi-phonon transitions in atom-surface scattering. A practically applicable expression is developed for the angular scattering distribution by utilising a discretized bath of oscillators, instead of the continuum limit.
At sufficiently low surface temperatures, good agreement is found between the present multi-phonon theory and the one- and two-phonon theories derived in the continuum limit in our previous study [Daon, Pollak, and Miret-Artés, J. Chem. Phys. 137, 201103 (2012)]. The theory is applied to the measured angular distributions of Ne, Ar, and Kr scattered from a Cu(111) surface. We find that the present multi-phonon theory substantially improves the agreement between experiment and theory, especially at the higher surface temperatures. This provides evidence for the importance of multi-phonon transitions in determining the angular distribution as the surface temperature is increased. 16. Silicon surface passivation using thin HfO2 films by atomic layer deposition International Nuclear Information System (INIS) Gope, Jhuma; Vandana; Batra, Neha; Panigrahi, Jagannath; Singh, Rajbir; Maurya, K.K.; Srivastava, Ritu; Singh, P.K. 2015-01-01 Graphical abstract: - Highlights: • HfO2 films using thermal ALD are studied for silicon surface passivation. • As-deposited thin film (∼8 nm) shows better passivation with surface recombination velocity (SRV) <100 cm/s. • Annealing improves passivation quality with SRV ∼20 cm/s for ∼8 nm film. - Abstract: Hafnium oxide (HfO2) is a potential material for equivalent oxide thickness (EOT) scaling in microelectronics; however, its surface passivation properties, particularly on silicon, are not well explored. This paper reports an investigation of the passivation properties of thin HfO2 films thermally deposited on silicon by atomic layer deposition (ALD). The as-deposited pristine film (∼8 nm) shows better passivation, with a surface recombination velocity (SRV) <100 cm/s, vis-à-vis thicker films. Further improvement in passivation quality is achieved with annealing at 400 °C for 10 min, where the SRV reduces to ∼20 cm/s.
Conductance measurements show that the interface defect density (Dit) increases with film thickness, whereas its value decreases after annealing. XRR data corroborate the FTIR and SRV observations. 17. Investigating biomolecular recognition at the cell surface using atomic force microscopy. Science.gov (United States) 2014-05-01 Probing the interaction forces that drive biomolecular recognition on cell surfaces is essential for understanding diverse biological processes. Force spectroscopy has been a widely used dynamic analytical technique, allowing measurement of such interactions at the molecular and cellular level. The capabilities of working under near physiological environments, combined with excellent force and lateral resolution, make atomic force microscopy (AFM)-based force spectroscopy a powerful approach to measure biomolecular interaction forces not only on non-biological substrates, but also on soft, dynamic cell surfaces. Over the last few years, AFM-based force spectroscopy has provided biophysical insight into how biomolecules on cell surfaces interact with each other and induce relevant biological processes. In this review, we focus on describing the technique of force spectroscopy using the AFM, specifically in the context of probing cell surfaces. We summarize recent progress in understanding the recognition and interactions between macromolecules that may be found at cell surfaces from a force spectroscopy perspective. We further discuss the challenges and future prospects of the application of this versatile technique. 18. Ex situ investigation of the step bunching on crystal surfaces by atomic force microscopy Science.gov (United States) Krasinski, Mariusz J. 1997-07-01 We describe ex situ observations of step bunching on the surfaces of solution-grown potassium dihydrogen phosphate (KDP) and sodium chlorate monocrystals.
The measurements were performed using an atomic force microscope, which allowed us to observe the structure of macrosteps directly. The observations confirmed the existence of step pinning, one of the proposed mechanisms of step bunching. Despite the very high resolution of AFM, it was not possible to determine the nature of the pinning points. The monatomic steps on KDP and sodium chlorate crystal surfaces are mainly one unit cell high, which seems to result from step pairing. The origin of the observed step pattern is discussed within the framework of existing theories. 19. Micro and nanostructural characterization of surfaces and interfaces of Portland cement mortars using atomic force microscopy International Nuclear Information System (INIS) Barreto, M.F.O.; Brandao, P.R.G. 2014-01-01 The characterization of Portland cement mortars is very important in the study of the interfaces and surfaces that make up the grout/ceramic block system. In this sense, scanning electron microscopy and energy-dispersive X-ray spectrometry are important tools for investigating morphological and chemical aspects. However, more detailed topographic information may be necessary in the characterization process. In this work, the aim was to topographically characterize surfaces and interfaces of mortars applied onto ceramic blocks. This was accomplished using an atomic force microscope (AFM; MFP-3D-SA, Asylum Research). To date, the results obtained from this research show that AFM-based characterization of cementitious materials makes an important contribution to the investigation and differentiation of hydrated calcium silicates (CSH), calcium hydroxide (Ca(OH)2), ettringite, and calcium carbonate by providing morphological and micro-topographical data, which are extremely important and reliable for the understanding of cementitious materials. (author) 20.
Nanoscopic morphological changes in yeast cell surfaces caused by oxidative stress: an atomic force microscopic study. Science.gov (United States) Canetta, Elisabetta; Walker, Graeme M; Adya, Ashok K 2009-06-01 Nanoscopic changes in the cell surface morphology of the yeasts Saccharomyces cerevisiae (strain NCYC 1681) and Schizosaccharomyces pombe (strain DVPB 1354), due to their exposure to varying concentrations of hydrogen peroxide (oxidative stress), were investigated using an atomic force microscope (AFM). Increasing hydrogen peroxide concentration led to a decrease in cell viabilities and mean cell volumes, and an increase in the surface roughness of the yeasts. In addition, AFM studies revealed that oxidative stress caused cell compression in both S. cerevisiae and Schiz. pombe cells and an increase in the number of aged yeasts. These results confirmed the importance and usefulness of AFM in investigating the morphology of stressed microbial cells at the nanoscale. The results also provided novel information on the relative oxidative stress tolerance of S. cerevisiae and Schiz. pombe. 1. Atomic-level spatial distributions of dopants on silicon surfaces: toward a microscopic understanding of surface chemical reactivity Science.gov (United States) Hamers, Robert J.; Wang, Yajun; Shan, Jun 1996-11-01 We have investigated the interaction of phosphine (PH3) and diborane (B2H6) with the Si(001) surface using scanning tunneling microscopy, infrared spectroscopy, and ab initio molecular orbital calculations. Experiment and theory show that the formation of P-Si heterodimers is energetically favorable compared with formation of P-P dimers. The stability of the heterodimers arises from a large strain energy associated with formation of P-P dimers. At moderate P coverages, the formation of P-Si heterodimers leaves the surface with few locations where there are two adjacent reactive sites.
This in turn modifies the chemical reactivity toward species such as PH3, which requires only one site to adsorb but requires two adjacent sites to dissociate. Boron on Si(001) strongly segregates into localized regions of high boron concentration, separated by large regions of clean Si. This leads to a spatially-modulated chemical reactivity which, during subsequent growth by chemical vapor deposition (CVD), leads to formation of a rough surface. The implications of the atomic-level spatial distribution of dopants on the rates and mechanisms of CVD growth processes are discussed. 2. Atom-specific look at the surface chemical bond using x-ray emission spectroscopy Energy Technology Data Exchange (ETDEWEB) Nilsson, A.; Wassdahl, N.; Weinelt, M. [Uppsala Univ. (Sweden)] [and others] 1997-04-01 CO and N2 adsorbed on the late transition metals have become prototype systems regarding the general understanding of molecular adsorption. It is in general assumed that the bonding of molecules to transition metals can be explained in terms of the interaction of the frontier HOMO and LUMO molecular orbitals with the d-orbitals. In such a picture the other molecular orbitals should remain essentially the same as in the free molecule. For the adsorption of the isoelectronic molecules CO and N2, this has led to the so-called Blyholder model, i.e., a synergetic σ (HOMO) donation and π (LUMO) backdonation bond. The authors' results at the ALS show that such a picture is oversimplified. The direct observation and identification of the states related to the surface chemical bond is an experimental challenge. For noble and transition metal surfaces, the adsorption-induced states overlap with the metal d valence band. Their signature is therefore often obscured by bulk substrate states.
This complication has made it difficult for techniques such as photoemission and inverse photoemission to provide reliable information on the energy of chemisorption-induced states and has left questions unanswered regarding the validity of the frontier orbitals concept. Here the authors show how x-ray emission spectroscopy (XES), in spite of its inherent bulk sensitivity, can be used to investigate adsorbed molecules. Due to the localization of the core-excited intermediate state, XE spectroscopy allows an atom-specific separation of the valence electronic states. Thus the molecular contributions to the surface measurements make it possible to determine the symmetry of the molecular states, i.e., the separation of π- and σ-type states. In all, the authors can obtain an atomic view of the electronic states involved in the formation of the chemical bond to the surface. 3. Atomic-scale simulation of dust grain collisions: Surface chemistry and dissipation beyond existing theory Science.gov (United States) Quadery, Abrar H.; Doan, Baochi D.; Tucker, William C.; Dove, Adrienne R.; Schelling, Patrick K. 2017-10-01 The early stages of planet formation involve steps where submicron-sized dust particles collide to form aggregates. However, the mechanism through which millimeter-sized particles aggregate to kilometer-sized planetesimals is still not understood. Dust grain collision experiments carried out in the environment of the Earth lead to the prediction of a 'bouncing barrier' at millimeter sizes. Theoretical models, e.g., Johnson-Kendall-Roberts and Derjaguin-Muller-Toporov theories, lack two key features, namely the chemistry of dust grain surfaces, and a mechanism for atomic-scale dissipation of energy. Moreover, interaction strengths in these models are parameterized based on experiments done in the Earth's environment.
To address these issues, we performed atomic-scale simulations of collisions between nonhydroxylated and hydroxylated amorphous silica nanoparticles. We used the ReaxFF approach which enables modeling chemical reactions using an empirical potential. We found that nonhydroxylated nanograins tend to adhere with much higher probability than suggested by existing theories. By contrast, hydroxylated nanograins exhibit a strong tendency to bounce. Also, the interaction between dust grains has the characteristics of a strong chemical force instead of weak van der Waals forces. This suggests that the formation of strong chemical bonds and dissipation via internal atomic vibration may result in aggregation beyond what is expected based on our current understanding. Our results also indicate that experiments should more carefully consider surface conditions to mimic the space environment. We also report results of simulations with molten silica nanoparticles. It is found that molten particles are more likely to adhere due to viscous dissipation, which supports theories that suggest aggregation to kilometer scales might require grains to be in a molten state. 4. Evolution of the Contact Area with Normal Load for Rough Surfaces: from Atomic to Macroscopic Scales. Science.gov (United States) Huang, Shiping 2017-11-13 The evolution of the contact area with normal load for rough surfaces has great fundamental and practical importance, ranging from earthquake dynamics to machine wear. This work bridges the gap between the atomic scale and the macroscopic scale for normal contact behavior. The real contact area, which is formed by a large ensemble of discrete contacts (clusters), is proven to be much smaller than the apparent surface area. The distribution of the discrete contact clusters and the interaction between them are key to revealing the mechanism of the contacting solids. 
To this end, Green's function molecular dynamics (GFMD) is used to study both how the contact cluster evolves from the atomic scale to the macroscopic scale and the interaction between clusters. It is found that the interaction between clusters has a strong effect on their formation. The formation and distribution of the contact clusters is far more complicated than that predicted by the asperity model. Ignoring the interaction between them leads to overestimation of the contact force. In real contact, contacting clusters are smaller and more discrete due to the interaction between the asperities. Understanding the exact nature of the contact area with the normal load is essential for subsequent research on friction. 5. Morphology and microstructure of Ag islands of aggregated atoms on oil surfaces Institute of Scientific and Technical Information of China (English) Zhang Chu-Hang; Lü Neng; Zhang Xiao-Fei; Saida Ajeeb; Xia A-Gen; Ye Gao-Xiang 2011-01-01 The morphology evolution of silver islands on silicone oil surfaces is measured and the microstructure of the islands is studied. The deposited Ag atoms diffuse and aggregate on the oil surface, and Ag islands with widths of the order of 10² nm then form. After the samples are removed from the vacuum chamber, immediate measurement shows that the apparent Ag coverage of the total area decays by up to (23.0±3.8)% within a few minutes.
In the following two hours, the samples are kept in the ambient atmosphere and several unexpected results are detected: 1) as the topological structure of the islands evolves, the total area of each island decreases gradually, and the maximum decrement measured is around 20%; 2) if an island breaks into two small pieces, the total area decreases markedly; 3) however, if two small islands meet and stick together, a sudden increment of the total area is observed. These phenomena, mirroring the evolution process of the island microstructure, result from both the diffusion of the atoms and the combination of the defects inside the islands. 6. Atomistic modeling determination of placeholder binding energy of Ti, C, and N atoms on a-Fe (100) surfaces International Nuclear Information System (INIS) Wei, X J; Liu, Y P; Han, S P 2015-01-01 An Fe(100) surface containing Ti, C, and N was constructed and optimized to study the placeholder binding energy of the Ti, C, and N surface atoms; this was achieved by searching the transition state with the LST (linear synchronous transit) method of the CASTEP (Cambridge Serial Total Energy Package) module. Also, the authors analyzed electron structures to determine how Ti, C, and N atoms strengthen the Fe(100) surface. The results show that when Ti, C, or N atoms occupy sites alone, or simultaneously, on the Fe(100) surface, the structural stability is highest. When including Ti, C, and N as solid solutions on the Fe(100) surface, orbital electrons of Fe3d, Ti3d, C2p, and N2p hybridize near the Fermi level; the number of electronic bonding peaks increases and the bonding capacity is enhanced. Also, a large number of covalent bonds formed; covalent and metallic bonds coexisted. (paper) 7.
Assembling three-dimensional nanostructures on metal surfaces with a reversible vertical single-atom manipulation: A theoretical modeling International Nuclear Information System (INIS) Yang Tianxing; Ye Xiang; Huang Lei; Xie Yiqun; Ke Sanhuang 2012-01-01 Highlights: ► We simulate the reversible vertical single-atom manipulations on several metal surfaces. ► We propose a method to predict whether a reversible vertical single-atom manipulation can be successful on several metal surfaces. ► A 3-dimensional Ni nanocluster is assembled on the Ni(111) surface using a Ni trimer-apex tip. - Abstract: We propose a theoretical model to show that pulling up an adatom from an atomic step requires a weaker force than from the flat surfaces of Al(001), Ni(111), Pt(110) and Au(110). A single adatom in the atomic step can be extracted vertically by a trimer-apex tip and can then be released onto the flat surface. This reversible vertical manipulation can then be used to fabricate a supported three-dimensional (3D) nanostructure on the Ni(111) surface. The present modeling can be used to predict whether the reversible vertical single-atom manipulation, and thus the assembling of 3D nanostructures, can be achieved on a metal surface. 8. Simultaneous measurement of the surface temperature and the release of atomic sodium from a burning black liquor droplet Energy Technology Data Exchange (ETDEWEB) Saw, Woei L.; Nathan, Graham J. [Centre for Energy Technology, The University of Adelaide, SA 5006 (Australia); School of Mechanical Engineering, The University of Adelaide (Australia); Ashman, Peter J.; Alwahabi, Zeyad T.
[Centre for Energy Technology, The University of Adelaide, SA 5006 (Australia); School of Chemical Engineering, The University of Adelaide (Australia); Hupa, Mikko [Process Chemistry Centre, Aabo Akademi, Biskopsgatan 8 FI-20500 Aabo (Finland) 2010-04-15 Simultaneous measurement of the concentration of released atomic sodium, the swelling, and the surface and internal temperature of a burning black liquor droplet under fuel-lean and fuel-rich conditions has been demonstrated. Two-dimensional two-colour optical pyrometry was employed to determine the distribution of surface temperature and swelling of a burning black liquor droplet while planar laser-induced fluorescence (PLIF) was used to assess the temporal release of atomic sodium. The key findings of these studies are: (i) the concentration of atomic sodium released during the drying and devolatilisation stages was found to be correlated with the external surface area; and (ii) the insignificant presence of atomic sodium during the char consumption stage shows that sodium release is suppressed by the lower temperature and by the high CO{sub 2} content in and around the particle. (author) 9. Quantifying the importance of galactofuranose in Aspergillus nidulans hyphal wall surface organization by atomic force microscopy. Science.gov (United States) Paul, Biplab C; El-Ganiny, Amira M; Abbas, Mariam; Kaminskyj, Susan G W; Dahms, Tanya E S 2011-05-01 The fungal wall mediates cell-environment interactions. Galactofuranose (Galf), the five-member ring form of galactose, has a relatively low abundance in Aspergillus walls yet is important for fungal growth and fitness. Aspergillus nidulans strains deleted for Galf biosynthesis enzymes UgeA (UDP-glucose-4-epimerase) and UgmA (UDP-galactopyranose mutase) lacked immunolocalizable Galf, had growth and sporulation defects, and had abnormal wall architecture. 
We used atomic force microscopy and force spectroscopy to image and quantify cell wall viscoelasticity and surface adhesion of ugeAΔ and ugmAΔ strains. We compared the results for ugeAΔ and ugmAΔ strains with the results for a wild-type strain (AAE1) and the ugeB deletion strain, which has wild-type growth and sporulation. Our results suggest that UgeA and UgmA are important for cell wall surface subunit organization and wall viscoelasticity. The ugeAΔ and ugmAΔ strains had significantly larger surface subunits and lower cell wall viscoelastic moduli than those of AAE1 or ugeBΔ hyphae. Double deletion strains (ugeAΔ ugeBΔ and ugeAΔ ugmAΔ) had more-disorganized surface subunits than single deletion strains. Changes in wall surface structure correlated with changes in its viscoelastic modulus for both fixed and living hyphae. Wild-type walls had the largest viscoelastic modulus, while the walls of the double deletion strains had the smallest. The ugmAΔ strain and particularly the ugeAΔ ugmAΔ double deletion strain were more adhesive to hydrophilic surfaces than the wild type, consistent with changes in wall viscoelasticity and surface organization. We propose that Galf is necessary for full maturation of A. nidulans walls during hyphal extension. 10. Surface modes of ultra-cold atomic clouds with very large number of vortices Energy Technology Data Exchange (ETDEWEB) Cazalilla, M A [Donostia International Physics Center, Donostia (Spain); [Abdus Salam International Centre for Theoretical Physics, Trieste (Italy) 2003-04-01 We study the surface modes of some of the vortex liquids recently found by means of exact diagonalizations in systems of rapidly rotating bosons. In contrast to the surface modes of Bose condensates, we find that the surface waves have a frequency linear in the excitation angular momentum, h-bar l > 0. 
Furthermore, in analogy with the edge waves of electronic quantum Hall states, these excitations are chiral, that is, they can be excited only for values of l that increase the total angular momentum of the vortex liquid. However, differently from the quantum Hall phenomena for electrons, we also find other excitations that are approximately degenerate in the laboratory frame with the surface modes, and which decrease the total angular momentum by l quanta. The surface modes of the Laughlin, as well as other scalar and vector boson states are analyzed, and their observable properties characterized. We argue that measurement of the response of a vortex liquid to a weak time-dependent potential that imparts angular momentum to the system should provide valuable information to characterize the vortex liquid. In particular, the intensity of the signal of the surface waves in the dynamic structure factor has been studied and found to depend on the type of vortex liquid. We point out that the existence of surface modes has observable consequences on the density profile of the Laughlin state. These features are due to the strongly correlated behavior of atoms in the vortex liquids. We point out that these correlations should be responsible for a remarkable stability of some vortex liquids with respect to three-body losses. (author) 11. Effect of surface functionalisation on the interaction of iron oxide nanoparticles with polymerase chain reaction. Science.gov (United States) Aysan, Ayse Beyza; Knejzlík, Zdeněk; Ulbrich, Pavel; Šoltys, Marek; Zadražil, Aleš; Štěpánek, František 2017-05-01 The combination of nanoparticles with the polymerase chain reaction (PCR) can have benefits such as easier sample handling or higher sensitivity, but also drawbacks such as loss of colloidal stability or inhibition of the PCR. 
The present work systematically investigates the interaction of magnetic iron oxide nanoparticles (MIONs) with the PCR in terms of colloidal stability and potential PCR inhibition due to interaction between the PCR components and the nanoparticle surface. Several types of MIONs with and without surface functionalisation by sodium citrate, dextran and 3-aminopropyl-triethoxysilane (APTES) were prepared and characterised by Transmission Electron Microscopy (TEM), dynamic light scattering (DLS) and Fourier Transform Infrared (FT-IR) spectroscopy. Colloidal stability in the presence of the PCR components was investigated both at room temperature and under PCR thermo-cycling. Dextran-stabilised MIONs show the best colloidal stability in the PCR mix at both room and elevated temperatures. Citrate- and APTES-stabilised as well as uncoated MIONs show a comparable PCR inhibition near a concentration of 0.1 mg/ml, while the inhibition of dextran-stabilised MIONs became apparent near 0.5 mg/ml. It was demonstrated that the PCR could be effectively carried out even in the presence of elevated MION concentrations of up to 2 mg/ml by choosing the right coating approach and supplementing the reaction mix with the critical components Taq DNA polymerase and Mg2+ ions. Copyright © 2017 Elsevier B.V. All rights reserved. 12. Surface photovoltage investigation of gold chains on Si(111) by two-photon photoemission Energy Technology Data Exchange (ETDEWEB) Otto, Sebastian; Biedermann, Kerstin; Fauster, Thomas [Lehrstuhl fuer Festkoerperphysik, Universitaet Erlangen-Nuernberg, Staudtstr. 7, D-91058 Erlangen (Germany) 2011-07-01 We present surface photovoltage measurements on Si(111)-(7 x 7) with monoatomic gold chains. The gold coverage was varied between zero and 0.6 ML, where the Si(111)-(5 x 2)-Au reconstruction covers the surface completely. 
During the two-photon photoemission experiments the p- or n-doped samples were illuminated by infrared (IR, E{sub IR}=1.55 eV) and ultraviolet (UV, E{sub UV}=4.65 eV) laser pulses. For all coverages the photovoltage was determined for sample temperatures of 90 K and 300 K by variation of the IR and UV laser power. P-doped as well as n-doped Si(111) wafers show a linear dependence of the photovoltage on gold coverage. This stands in contrast to scanning tunneling spectroscopy measurements, which show a coverage-independent photovoltage over a wide coverage range for n-doped wafers. While for p-doped wafers our experimentally determined photovoltage is in agreement with previous reports, for n-doped wafers the observed values are lower than expected. 13. Influence of short chain organic acids and bases on the wetting properties and surface energy of submicrometer ceramic powders. Science.gov (United States) Neirinck, Bram; Soccol, Dimitri; Fransaer, Jan; Van der Biest, Omer; Vleugels, Jef 2010-08-15 The effect of short chained organic acids and bases on the surface energy and wetting properties of submicrometer alumina powder was assessed. The surface chemistry of treated powders was determined by means of Diffuse Reflectance Infrared Fourier Transform spectroscopy and compared to untreated powder. The wetting of powders was measured using a modified Washburn method, based on the use of precompacted powder samples. The geometric factor needed to calculate the contact angle was derived from measurements of the porous properties of the powder compacts. Contact angle measurements with several probe liquids before and after modification allowed a theoretical estimation of the surface energy based on the surface tension component theory. Trends in the surface energy components were linked to observations in infrared spectra. The results showed that the hydrophobic character of the precompacted powder depends on both the chain length and polar group of the modifying agent. 
Copyright 2010 Elsevier Inc. All rights reserved. 14. Differential MS2 Interaction with Food Contact Surfaces Determined by Atomic Force Microscopy and Virus Recovery. Science.gov (United States) Shim, J; Stewart, D S; Nikolov, A D; Wasan, D T; Wang, R; Yan, R; Shieh, Y C 2017-12-15 15. Light emission from sputtered or backscattered atoms on tungsten surfaces under ion irradiation International Nuclear Information System (INIS) Sakai, Yasuhiro; Nogami, Keisuke; Kato, Daiji; Sakaue, Hiroyuki A.; Kenmotsu, Takahiko; Furuya, Kenji; Motohashi, Kenji 2013-01-01 We measured the intensity of light emission from sputtered atoms on tungsten surfaces under irradiation by Kr^+ and Ar^+ ions, as a function of the perpendicular distance from the surface. Using an analysis of the decay curve, we estimated the mean vertical velocity component in the direction normal to the surface. We found that the estimated mean velocity differed considerably depending on the excited state. For example, although the estimated mean vertical velocity component normal to the surface from the 400.9 nm line (5d^5(^6S)6p ^7P_4 → 5d^5(^6S)6s ^7S_3 transition) was 5.6±1.7 km/s, that from the 386.8 nm line (5d^4(^6S)6p ^7D_4 → 5d^5(^6S)6s ^7S_4 transition) was 2.8±1.0 km/s. However, for different projectiles and energies, we found no remarkable changes in the velocity. (author) 16. Adsorption of atomic oxygen on PdAg/Pd(111) surface alloys and coadsorption of CO Energy Technology Data Exchange (ETDEWEB) Farkas, Arnold P. [Institute of Surface Chemistry and Catalysis, Ulm University, D-89069 Ulm (Germany); Reaction Kinetics Research Group, University of Szeged, Chemical Research Center of the Hungarian Academy of Sciences, H-6720 Szeged (Hungary); Bansmann, Joachim; Diemant, Thomas; Behm, R. 
Juergen [Institute of Surface Chemistry and Catalysis, Ulm University, D-89069 Ulm (Germany) 2011-07-01 The interaction of dissociated oxygen with structurally well-defined PdAg/Pd(111) surface alloys and the coadsorption of CO was studied by high resolution electron energy loss spectroscopy (HREELS) and temperature-programmed desorption (TPD). After oxygen saturation of the non-modified Pd(111) surface at RT, we observed the formation of a prominent peak in the HREEL spectra at 60 meV corresponding to the perpendicular vibration of oxygen atoms adsorbed in threefold hollow sites. Deposition of small Ag amounts does not change the signal intensity of this peak; it decreases only above 20% Ag. Beyond this Ag content, the peak intensity steeply declines and disappears at around 55-60% Ag. CO coadsorption on the oxygen pre-covered surfaces at 120 K leads to the formation of additional features in HREELS. For a surface alloy with 29% Ag, three loss features due to CO adsorption in on-top, bridge, and threefold-hollow sites can be discriminated already after the lowest CO exposure. Annealing of the co-adsorbed layer to 200 K triggers a decrease of the oxygen concentration due to CO{sub 2} formation. These findings are corroborated by TPD spectra of the CO desorption and CO{sub 2} production. 17. Analysis of atomic force microscopy data for surface characterization using fuzzy logic International Nuclear Information System (INIS) Al-Mousa, Amjed; Niemann, Darrell L.; Niemann, Devin J.; Gunther, Norman G.; Rahman, Mahmud 2011-01-01 In this paper we present a methodology to characterize surface nanostructures of thin films. The methodology identifies and isolates nanostructures using Atomic Force Microscopy (AFM) data and extracts quantitative information, such as their size and shape. The fuzzy logic based methodology relies on a Fuzzy Inference Engine (FIE) to classify the data points as being top, bottom, uphill, or downhill. 
The resulting data sets are then further processed to extract quantitative information about the nanostructures. In the present work we introduce a mechanism which can consistently distinguish crowded surfaces from those with sparsely distributed structures and present an omni-directional search technique to improve the structural recognition accuracy. In order to demonstrate the effectiveness of our approach we present a case study which uses our approach to quantitatively identify particle sizes of two specimens each with a unique gold nanoparticle size distribution. - Research Highlights: → A Fuzzy logic analysis technique capable of characterizing AFM images of thin films. → The technique is applicable to different surfaces regardless of their densities. → Fuzzy logic technique does not require manual adjustment of the algorithm parameters. → The technique can quantitatively capture differences between surfaces. → This technique yields more realistic structure boundaries compared to other methods. 18. Imaging surface nanobubbles at graphite–water interfaces with different atomic force microscopy modes International Nuclear Information System (INIS) Yang, Chih-Wen; Lu, Yi-Hsien; Hwang, Ing-Shouh 2013-01-01 We have imaged nanobubbles on highly ordered pyrolytic graphite (HOPG) surfaces in pure water with different atomic force microscopy (AFM) modes, including the frequency-modulation, the tapping, and the PeakForce techniques. We have compared the performance of these modes in obtaining the surface profiles of nanobubbles. The frequency-modulation mode yields a larger height value than the other two modes and can provide more accurate measurement of the surface profiles of nanobubbles. Imaging with PeakForce mode shows that a nanobubble appears smaller and shorter with increasing peak force and disappears above a certain peak force, but the size returns to the original value when the peak force is reduced. 
This indicates that imaging with high peak forces does not cause gas removal from the nanobubbles. Based on the presented findings and previous AFM observations, the existing models for nanobubbles are reviewed and discussed. The model of a gas aggregate inside nanobubbles provides a better explanation for the puzzles of the high stability and the contact angle of surface nanobubbles. (paper) 19. Path-integral theory of the scattering of 4He atoms at the surface of liquid 4He International Nuclear Information System (INIS) Swanson, D.R.; Edwards, D.O. 1988-01-01 The path-integral theory of the scattering of a 4He atom near the free surface of liquid 4He, which was originally formulated by Echenique and Pendry, has been recalculated with use of a physically realistic static potential and atom-ripplon interaction outside the liquid. The static potential and atom-ripplon interaction are based on the variational calculation of Edwards and Fatouros. An important assumption in the path-integral theory is the "impulse approximation": that the motion of the scattered atom is very fast compared with the motion of the surface due to ripplons. This is found to be true only for ripplons with wave vectors smaller than q_m ∼ 0.2 Å^-1. If ripplons above q_m made an important contribution to the scattering of the atom, there would be a substantial dependence of the elastic reflection coefficient on the angle of incidence of the atom. Since this is not observed experimentally, it is argued that ripplons above q_m give a negligible effect and should be excluded from the calculation. With this modification the theory gives a good fit to the experimental reflection coefficient as a function of the momentum and angle of incidence of the atom. The new version of the theory indicates that there is a substantial probability that an atom may reach the surface of the liquid without exciting any ripplons. 
The theory is not valid when the atom enters the liquid but analysis of the experiments shows that, once inside the liquid, the atom has a negligible chance of being scattered out again 20. Adsorption of SO{sub 2} on Li atoms deposited on MgO (1 0 0) surface: DFT calculations Energy Technology Data Exchange (ETDEWEB) Eid, Kh.M., E-mail: [email protected] [Physics Department, Faculty of Education, Ain Shams University, Cairo 11757 (Egypt); Ammar, H.Y. [Department of Physics, Faculty of Science, Najran University, Najran 1988 (Saudi Arabia) 2011-05-01 The adsorption of sulfur dioxide molecule (SO{sub 2}) on Li atom deposited on the surfaces of metal oxide MgO (1 0 0) on both anionic and defect (F{sub s}-center) sites located on various geometrical defects (terrace, edge and corner) has been studied using density functional theory (DFT) in combination with embedded cluster model. The adsorption energy (E{sub ads}) of SO{sub 2} molecule (S-atom down as well as O-atom down) in different positions on both of O{sup -2} and F{sub s} sites is considered. The spin density (SD) distribution due to the presence of Li atom is discussed. The geometrical optimizations have been done for the additive materials and MgO substrate surfaces (terrace, edge and corner). The oxygen vacancy formation energies have been evaluated for MgO substrate surfaces. The ionization potential (IP) for defect free and defect containing of the MgO surfaces has been calculated. The adsorption properties of SO{sub 2} are analyzed in terms of the E{sub ads}, the electron donation (basicity), the elongation of S-O bond length and the atomic charges on adsorbed materials. The presence of the Li atom increases the catalytic effect of the anionic O{sup -2} site of MgO substrate surfaces (converted from physisorption to chemisorption). On the other hand, the presence of the Li atom decreases the catalytic effect of the F{sub s}-site of MgO substrate surfaces. 
Generally, the SO{sub 2} molecule is strongly adsorbed (chemisorption) on the MgO substrate surfaces containing F{sub s}-center. 1. Electrode surface engineering by atomic layer deposition: A promising pathway toward better energy storage KAUST Repository Ahmed, Bilal 2016-04-29 Research on electrochemical energy storage devices including Li ion batteries (LIBs), Na ion batteries (NIBs) and supercapacitors (SCs) has accelerated in recent years, in part because developments in nanomaterials are making it possible to achieve high capacities and energy and power densities. These developments can extend battery life in portable devices, and open new markets such as electric vehicles and large-scale grid energy storage. It is well known that surface reactions largely determine the performance and stability of electrochemical energy storage devices. Despite showing impressive capacities and high energy and power densities, many of the new nanostructured electrode materials suffer from limited lifetime due to severe electrode interaction with electrolytes or due to large volume changes. Hence control of the surface of the electrode material is essential for both increasing capacity and improving cyclic stability of the energy storage devices. Atomic layer deposition (ALD), which has become a pervasive synthesis method in the microelectronics industry, has recently emerged as a promising process for electrochemical energy storage. ALD boasts excellent conformality, atomic scale thickness control, and uniformity over large areas. Since ALD is based on self-limiting surface reactions, complex shapes and nanostructures can be coated with excellent uniformity, and most processes can be done below 200 °C. In this article, we review recent studies on the use of ALD coatings to improve the performance of electrochemical energy storage devices, with particular emphasis on the studies that have provided mechanistic insight into the role of ALD in improving device performance. 
© 2016 Elsevier Ltd. 2. Relationship between Length and Surface-Enhanced Raman Spectroscopy Signal Strength in Metal Nanoparticle Chains: Ideal Models versus Nanofabrication Directory of Open Access Journals (Sweden) Kristen D. Alexander 2012-01-01 Full Text Available We have employed capillary force deposition on ion beam patterned substrates to fabricate chains of 60 nm gold nanospheres ranging in length from 1 to 9 nanoparticles. Measurements of the surface-averaged SERS enhancement factor strength for these chains were then compared to the numerical predictions. The SERS enhancement conformed to theoretical predictions in the case of only a few chains, with the vast majority of chains tested not matching such behavior. Although all of the nanoparticle chains appear identical under electron microscope observation, the extreme sensitivity of the SERS enhancement to nanoscale morphology renders current nanofabrication methods insufficient for consistent production of coupled nanoparticle chains. Notwithstanding this fact, the aggregate data also confirmed that nanoparticle dimers offer a large improvement over the monomer enhancement while conclusively showing that, within the limitations imposed by current state-of-the-art nanofabrication techniques, chains comprising more than two nanoparticles provide only a marginal signal boost over the already considerable dimer enhancement. 3. Atomic force microscopy measurements of topography and friction on dotriacontane films adsorbed on a SiO2 surface DEFF Research Database (Denmark) Trogisch, S.; Simpson, M.J.; Taub, H. 2005-01-01 We report comprehensive atomic force microscopy (AFM) measurements at room temperature of the nanoscale topography and lateral friction on the surface of thin solid films of an intermediate-length normal alkane, dotriacontane (n-C32H66), adsorbed onto a SiO2 surface. Our topographic and frictional... 4. 
Surface passivation of nano-textured fluorescent SiC by atomic layer deposited TiO2 DEFF Research Database (Denmark) Lu, Weifang; Ou, Yiyu; Jokubavicius, Valdas 2016-01-01 Nano-textured surfaces have played a key role in optoelectronic materials to enhance the light extraction efficiency. In this work, morphology and optical properties of nano-textured SiC covered with atomic layer deposited (ALD) TiO2 were investigated. In order to obtain a high quality surface fo... 5. Atomic and molecular adsorption on transition-metal carbide (111) surfaces from density-functional theory: a trend study of surface electronic factors DEFF Research Database (Denmark) Vojvodic, Aleksandra; Ruberto, C.; Lundqvist, Bengt 2010-01-01 ) surfaces. The spatial extent and the dangling bond nature of these SRs are supported by real-space analyses of the calculated Kohn-Sham wavefunctions. Then, atomic and molecular adsorption energies, geometries, and charge transfers are presented. An analysis of the adsorbate-induced changes in surface DOSs... 6. Atomic Structure of a Spinel-like Transition Al2O3 (100) Surface DEFF Research Database (Denmark) Jensen, Thomas Nørregaard; Meinander, Kristoffer; Helveg, Stig 2014-01-01 We study a crystalline epitaxial alumina thin film with the characteristics of a spinel-type transition Al2O3(100) surface by using atom-resolved noncontact atomic force microscopy and density functional theory. It is shown that the films are terminated by an Al-O layer rich in Al vacancies......, exhibiting a strong preference for surface hydroxyl group formation in two configurations. The transition alumina films are crystalline and perfectly stable in ambient atmospheres, a quality which is expected to open the door to new fundamental studies of the surfaces of transition aluminas.... 7. 
Control of in vivo disposition and immunogenicity of polymeric micelles by adjusting poly(sarcosine) chain lengths on surface Science.gov (United States) Kurihara, Kensuke; Ueda, Motoki; Hara, Isao; Ozeki, Eiichi; Togashi, Kaori; Kimura, Shunsaku 2017-07-01 Four kinds of A3B-type amphiphilic polydepsipeptides, (poly(sarcosine))3-b-poly(L-lactic acid) (the degrees of polymerization of poly(sarcosine) are 10, 33, 55, and 85; S10_3, S33_3, S55_3, and S85_3), were synthesized to prepare core-shell type polymeric micelles. Their in vivo dispositions and their stimulation of the immune system to produce IgM upon multiple administrations to mice were examined. With increasing poly(sarcosine) chain lengths, the hydrophilic shell became thicker and the density at the outermost surface decreased, on the basis of dynamic and static light scattering measurements. These two physical elements of the polymeric micelles elicited opposite effects on the immune response with respect to chain length, indicating an optimal poly(sarcosine) chain length between 33mer and 55mer for suppressing the accelerated blood clearance phenomenon associated with polymeric micelles. 8. Rapid generation of protein aerosols and nanoparticles via surface acoustic wave atomization International Nuclear Information System (INIS) Alvarez, Mar; Friend, James; Yeo, Leslie Y 2008-01-01 We describe the fabrication of a surface acoustic wave (SAW) atomizer and show its ability to generate monodisperse aerosols and particles for drug delivery applications. In particular, we demonstrate the generation of insulin liquid aerosols for pulmonary delivery and solid protein nanoparticles for transdermal and gastrointestinal delivery routes using 20 MHz SAW devices. Insulin droplets around 3 μm were obtained, matching the optimum range for maximizing absorption in the alveolar region. 
A new approach is provided to explain these atomized droplet diameters by returning to fundamental physical analysis and considering viscous-capillary and inertial-capillary force balance rather than employing modifications to the Kelvin equation under the assumption of parametric forcing that has been extended to these frequencies in past investigations. In addition, we consider possible mechanisms by which the droplet ejections take place with the aid of high-speed flow visualization. Finally, we show that nanoscale protein particles (50-100 nm in diameter) were obtained through an evaporative process of the initial aerosol, the final size of which could be controlled merely by modifying the initial protein concentration. These results illustrate the feasibility of using SAW as a novel method for rapidly producing particles and droplets with a controlled and narrow size distribution. 9. Atom-resolved surface chemistry using scanning tunneling microscopy (STM) and spectroscopy (STS) International Nuclear Information System (INIS) Avouris, P. 1989-01-01 The author shows that by using STM and STS one can study chemistry with atomic resolution. The author uses two examples: the reaction of Si(111)-(7x7) with (a) NH3 and (b) decaborane (DB). In case (a) the author can directly observe the spatial distribution of the reaction. He determined which surface atoms have reacted and how the products of the reaction are distributed. He found that the different dangling-bond sites have significantly different reactivities and explains these differences in terms of the local electronic structure. In case (b) the 7x7 reconstruction is eliminated and, at high temperatures, (√3 x √3)R30° reconstructions are observed. Depending on the amount of DB and the annealing temperature, the √3 structures contain variable numbers of B and Si adatoms on T4 sites. 
Calculations show that the structure involving B adatoms, although kinetically favored, is not the lowest energy configuration. The lowest energy state involves B in a substitutional site under a Si adatom. 10. Constructing Functional Ionic Membrane Surface by Electrochemically Mediated Atom Transfer Radical Polymerization Directory of Open Access Journals (Sweden) Fen Ran 2016-01-01 Full Text Available A sodium polyacrylate (PAANa)-containing polyethersulfone (PES) membrane was fabricated by preparation of PES-NH2 via a nonsolvent phase separation method, the introduction of bromine groups as active sites by grafting α-bromoisobutyryl bromide, and surface-initiated electrochemically mediated atom transfer radical polymerization (SI-eATRP) of sodium acrylate (AANa) on the surface of the PES membrane. The polymerization could be controlled by the reaction conditions, such as monomer concentration, electric potential, polymerization time, and modifier concentration. The membrane surface was uniform when the monomer concentration was 0.9 mol/L, the electric potential was −0.12 V, the polymerization time was 8 h, and the modifier concentration was 2 wt.%. The membrane showed excellent hydrophilicity and blood compatibility. The water contact angle decreased from 84° to 68° and the activated partial thromboplastin time increased from 51 s to 84 s after modification of the membranes. 11. Kirchhoff approximation and closed-form expressions for atom-surface scattering International Nuclear Information System (INIS) Marvin, A.M. 1980-01-01 In this paper an approximate solution for atom-surface scattering is presented beyond the physical optics approximation. The potential is well represented by a hard corrugated surface but includes an attractive tail in front. The calculation is carried out analytically by two different methods, and the limit of validity of our formulas is well established in the text. 
In contrast with other workers, I find those expressions to be exact in both limits of small (Rayleigh region) and large momenta (classical region), with the correct behavior at the threshold. The result is attained through a particular use of the extinction theorem in writing the scattered amplitudes, hitherto not employed, and not for particular boundary values of the field. An explicit evaluation of the field on the surface shows in fact the present formulas to be simply related to the well known Kirchhoff approximation (KA) or more generally to an ''extended'' KA fit to the potential model above. A possible application of the theory to treat strong resonance-overlapping effects is suggested in the last part of the work 12. Atomic layer deposition in nanostructured photovoltaics: tuning optical, electronic and surface properties Science.gov (United States) Palmstrom, Axel F.; Santra, Pralay K.; Bent, Stacey F. 2015-07-01 Nanostructured materials offer key advantages for third-generation photovoltaics, such as the ability to achieve high optical absorption together with enhanced charge carrier collection using low cost components. However, the extensive interfacial areas in nanostructured photovoltaic devices can cause high recombination rates and a high density of surface electronic states. In this feature article, we provide a brief review of some nanostructured photovoltaic technologies including dye-sensitized, quantum dot sensitized and colloidal quantum dot solar cells. We then introduce the technique of atomic layer deposition (ALD), which is a vapor phase deposition method using a sequence of self-limiting surface reaction steps to grow thin, uniform and conformal films. We discuss how ALD has established itself as a promising tool for addressing different aspects of nanostructured photovoltaics. 
Examples include the use of ALD to synthesize absorber materials for both quantum dot and plasmonic solar cells, to grow barrier layers for dye and quantum dot sensitized solar cells, and to infiltrate coatings into colloidal quantum dot solar cells to improve charge carrier mobilities as well as stability. We also provide an example of monolayer surface modification in which adsorbed ligand molecules on quantum dots are used to tune the band structure of colloidal quantum dot solar cells for improved charge collection. Finally, we comment on the present challenges and future outlook of the use of ALD for nanostructured photovoltaics. 13. Evaluation of surface roughness of orthodontic wires by means of atomic force microscopy. Science.gov (United States) D'Antò, Vincenzo; Rongo, Roberto; Ametrano, Gianluca; Spagnuolo, Gianrico; Manzo, Paolo; Martina, Roberto; Paduano, Sergio; Valletta, Rosa 2012-09-01 To compare the surface roughness of different orthodontic archwires. Four nickel-titanium wires (Sentalloy(®), Sentalloy(®) High Aesthetic, Titanium Memory ThermaTi Lite(®), and Titanium Memory Esthetic(®)), three β-titanium wires (TMA(®), Colored TMA(®), and Beta Titanium(®)), and one stainless-steel wire (Stainless Steel(®)) were considered for this study. Three samples for each wire were analyzed by atomic force microscopy (AFM). Three-dimensional images were processed using Gwyddion software, and the roughness average (Ra), the root mean square (Rms), and the maximum height (Mh) values of the scanned surface profile were recorded. Statistical analysis was performed by one-way analysis of variance (ANOVA) followed by Tukey's post hoc test. Sentalloy High Aesthetic was the roughest (Ra = 133.5 ± 10.8; Rms = 165.8 ± 9.8; Mh = 949.6 ± 192.1) of the archwires. The surface quality of the wires investigated differed significantly. Ion implantation effectively reduced the roughness of TMA.
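The roughness descriptors quoted above have simple definitions over a measured height profile: Ra is the mean absolute deviation from the mean height, Rms the root-mean-square deviation, and Mh the peak-to-valley height. A minimal sketch of those definitions (the height values are invented placeholder data, not measurements from the study):

```python
# Roughness metrics over a 1-D height profile, as reported by AFM software.
# The heights below (in nm) are invented placeholder data, not data from the
# archwire study above.
heights = [120.0, 135.5, 128.2, 150.1, 110.7, 142.3, 125.9, 138.4]

mean_h = sum(heights) / len(heights)
Ra = sum(abs(h - mean_h) for h in heights) / len(heights)              # roughness average
Rms = (sum((h - mean_h) ** 2 for h in heights) / len(heights)) ** 0.5  # root mean square
Mh = max(heights) - min(heights)                                       # peak-to-valley height

print(f"Ra = {Ra:.2f} nm, Rms = {Rms:.2f} nm, Mh = {Mh:.2f} nm")
```

By construction Rms is always at least Ra, and Mh is the most outlier-sensitive of the three, which is consistent with the large Mh spread reported in the abstract.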
Moreover, Teflon(®)-coated Titanium Memory Esthetic was less rough than ion-implanted Sentalloy High Aesthetic. 14. Surface modification of nanodiamond through metal-free atom transfer radical polymerization Science.gov (United States) Zeng, Guangjian; Liu, Meiying; Shi, Kexin; Heng, Chunning; Mao, Liucheng; Wan, Qing; Huang, Hongye; Deng, Fengjie; Zhang, Xiaoyong; Wei, Yen 2016-12-01 Surface modification of nanodiamond (ND) with poly(2-methacryloyloxyethyl phosphorylcholine) [poly(MPC)] has been achieved by using metal-free surface-initiated atom transfer radical polymerization (SI-ATRP). The ATRP initiator was first immobilized on the surface of ND through a direct esterification reaction between the hydroxyl groups of ND and 2-bromoisobutyryl bromide. The initiator could be employed to obtain ND-poly(MPC) nanocomposites through SI-ATRP using an organic catalyst. The final functional materials were characterized by 1H nuclear magnetic resonance, transmission electron microscopy, X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy and thermogravimetric analysis in detail. All of these characterization results demonstrated that ND-poly(MPC) has been successfully obtained via metal-free photo-initiated SI-ATRP. The ND-poly(MPC) nanocomposites showed enhanced dispersibility in various solvents as well as excellent biocompatibility. Compared with traditional ATRP, metal-free ATRP is simple and effective; more importantly, this preparation method avoids the negative influence of metal catalysts. Therefore, the method described in this work should be a promising strategy for the fabrication of polymeric nanocomposites with great potential for different applications, especially in biomedical fields. 15. Micropatterning of bacteria on two-dimensional lattice protein surface observed by atomic force microscopy International Nuclear Information System (INIS) Oh, Y.J.; Jo, W.; Lim, J.; Park, S.; Kim, Y.S.; Kim, Y.
2008-01-01 In this study, we characterized the two-dimensional lattice of bovine serum albumin (BSA) as a chemical and physical barrier against bacterial adhesion, using fluorescence microscopy and atomic force microscopy (AFM). The lattice of BSA on a glass surface was fabricated by micro-contact printing (μCP), which is a useful way to pattern a wide range of molecules into microscale features on different types of substrates. The contact-mode AFM measurements showed that the average height of the printed BSA monolayer was 5-6 nm. Escherichia coli adhered rapidly on the bare glass slide, while bacterial adhesion was minimized on lattices in the range of 1-3 μm². In particular, bacterial adhesion was completely inhibited on a 1 μm² lattice. The results suggest that the anti-adhesion effects are due to the steric repulsion forces exerted by BSA 16. Trajectory-dependent energy loss for swift He atoms axially scattered off a silver surface Energy Technology Data Exchange (ETDEWEB) Ríos Rubiano, C.A. [Instituto de Astronomía y Física del Espacio (CONICET-UBA), Casilla de correo 67, sucursal 28, 1428 Buenos Aires (Argentina)]; Bocan, G.A. [Centro Atómico Bariloche, Comisión Nacional de Energía Atómica, and Consejo Nacional de Investigaciones Científicas y Técnicas, S.C. de Bariloche, Río Negro (Argentina)]; Juaristi, J.I. [Departamento de Física de Materiales, Facultad de Químicas, UPV/EHU, 20018 San Sebastián (Spain); Donostia International Physics Center (DIPC) and Centro de Física de Materiales CFM/MPC (CSIC-UPV/EHU), 20018 San Sebastián (Spain)]; Gravielle, M.S., E-mail: [email protected] [Instituto de Astronomía y Física del Espacio (CONICET-UBA), Casilla de correo 67, sucursal 28, 1428 Buenos Aires (Argentina)] 2014-12-01 Angle- and energy-loss-resolved distributions of helium atoms grazingly scattered from a Ag(110) surface along low-indexed crystallographic directions are investigated considering impact energies in the few keV range.
Final projectile distributions are evaluated within a semi-classical formalism that includes dissipative effects due to electron–hole excitations through a friction force. For mono-energetic beams impinging along the [11̄0], [11̄2] and [001] directions, the model predicts the presence of multiple peak structures in energy-loss spectra. Such structures provide detailed information about the trajectory-dependent energy loss. However, when the experimental dispersion of the incident beam is taken into account, these energy-loss peaks are completely washed out, giving rise to a smooth energy-loss distribution, in fairly good agreement with available experimental data. 17. Scanning tunneling microscopy of the atomically smooth (001) surface of vanadium pentoxide V2O5 crystals International Nuclear Information System (INIS) Muslimov, A. E.; Butashin, A. V.; Kanevsky, V. M. 2017-01-01 The (001) cleavage surface of a vanadium pentoxide (V2O5) crystal has been studied by scanning tunneling microscopy (STM). It is shown that the surface is not reconstructed; the STM image allows the geometric lattice parameters to be determined with high accuracy. The nanostructure formed on the (001) cleavage surface of the crystal consists of atomically smooth steps with heights that are multiples of the unit-cell parameter c = 4.37 Å. The V2O5 crystal cleavages can be used as references in the calibration of a scanning tunneling microscope under atmospheric conditions, both in the (x, y) surface plane and normal to the sample surface (along the z axis). It is found that the terrace surface is not perfectly atomically smooth; its roughness is estimated to be ~0.5 Å. This circumstance may introduce an additional error into the microscope calibration along the z coordinate. 18. Influence of the side chain and substrate on polythiophene thin film surface, bulk, and buried interfacial structures.
Science.gov (United States) Xiao, Minyu; Jasensky, Joshua; Zhang, Xiaoxian; Li, Yaoxin; Pichan, Cayla; Lu, Xiaolin; Chen, Zhan 2016-08-10 The molecular structures of organic semiconducting thin films mediate the performance of various devices composed of such materials. To fully understand how the structures of organic semiconductors alter on substrates due to different polymer side chains and different interfacial interactions, thin films of two kinds of polythiophene derivatives with different side-chains, poly(3-hexylthiophene) (P3HT) and poly(3-potassium-6-hexanoate thiophene) (P3KHT), were deposited and compared on various surfaces. A combination of analytical tools was applied in this research: contact angle goniometry and X-ray photoelectron spectroscopy (XPS) were used to characterize substrate dielectric surfaces with varied hydrophobicity for polymer film deposition; X-ray diffraction and UV-vis spectroscopy were used to examine the polythiophene film bulk structure; sum frequency generation (SFG) vibrational spectroscopy was utilized to probe the molecular structures of polymer film surfaces in air and buried solid/solid interfaces. Both side-chain hydrophobicity and substrate hydrophobicity were found to mediate the crystallinity of the polythiophene film, as well as the orientation of the thiophene ring within the polymer backbone at the buried polymer/substrate interface and the polymer thin film surface in air. For the same type of polythiophene film deposited on different substrates, a more hydrophobic substrate surface induced thiophene ring alignment with the surface normal at both the buried interface and on the surface in air. For different films (P3HT vs. P3KHT) deposited on the same dielectric substrate, a more hydrophobic polythiophene side chain caused the thiophene ring to align more towards the surface at the buried polymer/substrate interface and on the surface in air. 
We believe that the polythiophene surface, bulk, and buried interfacial molecular structures all influence the hole mobility within the polythiophene film. Successful characterization of an organic conducting 19. PREFACE: International Conference on Many Particle Spectroscopy of Atoms, Molecules, Clusters and Surfaces (MPS2014) Science.gov (United States) Ancarani, Lorenzo Ugo 2015-04-01 This volume contains a collection of contributions from the invited speakers at the 2014 edition of the International Conference on Many Particle Spectroscopy of Atoms, Molecules, Clusters and Surfaces held in Metz, France, from 15th to 18th July 2014. This biennial conference alternates with the ICPEAC satellite International Symposium on (e,2e), Double Photoionization and Related Topics, and is concerned with experimental and theoretical studies of radiation interactions with matter. These include many-body and electron-electron correlation effects in excitation, and in single and multiple ionization of atoms, molecules, clusters and surfaces with various projectiles: electrons, photons and ions. More than 80 scientists, from 19 different countries around the world, came together to discuss the most recent progress on these topics. The scientific programme included 28 invited talks and a poster session extending over the three days of the meeting. Of the 51 posters, 11 were selected and presented as short talks. In addition, Professor Nora Berrah gave a talk in memory of Professor Uwe Becker, who sadly passed away shortly after co-chairing the previous edition of this conference. Financial support from the Institut Jean Barriol, Laboratoire SRSMC, Groupement de Recherche THEMS (CNRS), Ville de Metz, Metz Métropole, Conseil Général de la Moselle and Région Lorraine is gratefully acknowledged.
Finally, I would like to thank the members of the local committee and the staff of the Université de Lorraine for making the conference run smoothly, the International Advisory Board for building up the scientific programme, the session chairpersons, those who gave their valuable time in carefully refereeing the articles of this volume and, last but not least, all participants for contributing to lively and fruitful discussions throughout the meeting. 20. Features of static and dynamic friction profiles in one and two dimensions on polymer and atomically flat surfaces using atomic force microscopy International Nuclear Information System (INIS) Watson, G S; Watson, J A 2008-01-01 In this paper we correlate atomic force microscope probe movement with surface location while scanning in the imaging and force versus distance modes. Static and dynamic stick-slip processes are described on a scale of nanometres to microns on a range of samples. We demonstrate the limits and range over which the tip apex remains laterally fixed in the force versus distance mode, and the dependence of the static friction slope on probe parameters. Micron-scale static and dynamic friction can be used to purposefully manipulate soft surfaces to produce well-defined frictional gradients 1. Ultracold atoms on atom chips DEFF Research Database (Denmark) Krüger, Peter; Hofferberth, S.; Haller, E. 2005-01-01 Miniaturized potentials near the surface of atom chips can be used as flexible and versatile tools for the manipulation of ultracold atoms on a microscale. The full scope of possibilities is only accessible if atom-surface distances can be reduced to microns. We discuss experiments in this regime... 2. Atomic and electronic structure of the CdTe(111)B–(2√3 × 4) orthogonal surface Energy Technology Data Exchange (ETDEWEB) Bekenev, V. L., E-mail: [email protected]; Zubkova, S. M.
[National Academy of Sciences of Ukraine, Frantsevych Institute for Problems of Materials Science (Ukraine)] 2017-01-15 The atomic and electronic structures of four variants of the Te-terminated CdTe(111)B–(2√3 × 4) orthogonal polar surface (ideal, relaxed, reconstructed, and reconstructed with subsequent relaxation) are calculated ab initio for the first time. The surface is modeled by a film composed of 12 atomic layers with a vacuum gap of ~16 Å in the layered superlattice approximation. To passivate the Cd dangling bonds on the opposite side of the film, 24 fictitious hydrogen atoms with a charge of 1.5 electrons each are added. Ab initio calculations are performed using the Quantum ESPRESSO program based on density functional theory. It is demonstrated that relaxation leads to splitting of the four upper layers. The band energy structures and total and layer-by-layer densities of electronic states for the four surface variants are calculated and analyzed. 3. Atomic-scale coexistence of periodic and quasiperiodic order in a 2-fold Al-Ni-Co decagonal quasicrystal surface Energy Technology Data Exchange (ETDEWEB) Park, Jeong Young; Ogletree, D. Frank; Salmeron, Miquel; Ribeiro, R.A.; Canfield, P.C.; Jenks, C.J.; Thiel, P.A. 2005-11-14 Decagonal quasicrystals are made of pairs of atomic planes with pentagonal symmetry periodically stacked along a 10-fold axis. We have investigated the atomic structure of the 2-fold surface of a decagonal Al-Ni-Co quasicrystal using scanning tunneling microscopy (STM). The surface consists of terraces separated by steps of heights 1.9, 4.7, 7.8, and 12.6 Å containing rows of atoms parallel to the 10-fold direction with an internal periodicity of 4 Å. The rows are arranged aperiodically, with separations that follow a Fibonacci sequence and inflation symmetry. The results indicate that the surfaces are preferentially Al-terminated and in general agreement with bulk models. 4.
Effect of ozone concentration on silicon surface passivation by atomic layer deposited Al2O3 International Nuclear Information System (INIS) Gastrow, Guillaume von; Li, Shuo; Putkonen, Matti; Laitinen, Mikko; Sajavaara, Timo; Savin, Hele 2015-01-01 Highlights: • The ALD Al2O3 passivation quality can be controlled by the ozone concentration. • The ozone concentration affects the Si/Al2O3 interface charge and defect density. • A surface recombination velocity of 7 cm/s is reached by combining ozone and water ALD. • Carbon and hydrogen concentrations correlate with the surface passivation quality. - Abstract: We study the impact of ozone-based Al2O3 atomic layer deposition (ALD) on the surface passivation quality of crystalline silicon. We show that the passivation quality depends strongly on the ozone concentration: a higher ozone concentration results in a lower interface defect density and thereby improved passivation. In contrast to previous studies, our results reveal that too high an interface hydrogen content can be detrimental to the passivation. The interface hydrogen concentration can be optimized by the ozone-based process; however, the use of pure ozone increases the harmful carbon concentration in the film. Here we demonstrate that low carbon and optimal hydrogen concentrations can be achieved by a single process combining the water- and ozone-based reactions. This process results in an interface defect density of 2 × 10¹¹ eV⁻¹ cm⁻², and maximum surface recombination velocities of 7.1 cm/s and 10 cm/s after annealing and after an additional firing at 800 °C, respectively. In addition, our results suggest that the effective oxide charge density can be optimized in a simple way by varying the ozone concentration and by injecting water into the ozone process. 5.
Protective capping and surface passivation of III-V nanowires by atomic layer deposition Directory of Open Access Journals (Sweden) Veer Dhaka 2016-01-01 Full Text Available Low temperature (∼200 °C) grown atomic layer deposition (ALD) films of AlN, TiN, Al2O3, GaN, and TiO2 were tested for protective capping and surface passivation of bottom-up grown III-V (GaAs and InP) nanowires (NWs), and top-down fabricated InP nanopillars. For as-grown GaAs NWs, only the AlN material passivated the GaAs surface as measured by photoluminescence (PL) at low temperatures (15 K), and the best passivation was achieved with a few monolayer thick (2 Å) film. For InP NWs, the best passivation (∼2x enhancement in room-temperature PL) was achieved with a capping of 2 nm thick Al2O3. All other ALD capping layers resulted in a de-passivation effect and possible damage to the InP surface. Top-down fabricated InP nanopillars show similar passivation effects as InP NWs. In particular, capping with a 2 nm thick Al2O3 layer increased the carrier decay time from 251 ps (as-etched nanopillars) to about 525 ps. Tests after six months of ageing reveal that the capped nanostructures retain their optical properties. Overall, capping of GaAs and InP NWs with high-k dielectrics AlN and Al2O3 provides moderate surface passivation as well as long-term protection from oxidation and environmental attack. 7.
Synthesis and characterization of TiO2/Ag/polymer ternary nanoparticles via surface-initiated atom transfer radical polymerization International Nuclear Information System (INIS) Park, Jung Tae; Koh, Joo Hwan; Seo, Jin Ah; Cho, Yong Soo; Kim, Jong Hak 2011-01-01 We report novel ternary hybrid materials consisting of a semiconductor (TiO2), a metal (Ag) and a polymer (poly(oxyethylene methacrylate), POEM). First, the hydrophilic polymer POEM was grafted from TiO2 nanoparticles via the surface-initiated atom transfer radical polymerization (ATRP) technique. These TiO2-POEM brush nanoparticles were used to template the formation of Ag nanoparticles by introduction of a AgCF3SO3 precursor and a NaBH4 aqueous solution for the reduction process. Successful grafting of polymeric chains from the surface of the TiO2 nanoparticles and the in situ formation of Ag nanoparticles within the polymeric chains were confirmed using transmission electron microscopy (TEM), UV-vis spectroscopy, X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS). FT-IR spectroscopy also revealed the specific interaction of the Ag nanoparticles with the C=O groups of the POEM brushes. This study presents a simple route for the in situ synthesis of both metal and polymer confined within the semiconductor, producing ternary hybrid inorganic-organic nanomaterials. 8. Catalytic Activity and Stability of Oxides: The Role of Near-Surface Atomic Structures and Compositions. Science.gov (United States) Feng, Zhenxing; Hong, Wesley T; Fong, Dillon D; Lee, Yueh-Lin; Yacoby, Yizhak; Morgan, Dane; Shao-Horn, Yang 2016-05-17 Electrocatalysts play an important role in catalyzing the kinetics for oxygen reduction and oxygen evolution reactions for many air-based energy storage and conversion devices, such as metal-air batteries and fuel cells.
Although noble metals have been extensively used as electrocatalysts, their limited natural abundance and high costs have motivated the search for more cost-effective catalysts. Oxides are suitable candidates since they are relatively inexpensive and have shown reasonably high activity for various electrochemical reactions. However, a lack of fundamental understanding of the reaction mechanisms has been a major hurdle toward improving electrocatalytic activity. Detailed studies of the oxide surface atomic structure and chemistry (e.g., cation migration) can provide much needed insights for the design of highly efficient and stable oxide electrocatalysts. In this Account, we focus on recent advances in characterizing strontium (Sr) cation segregation and enrichment near the surface of Sr-substituted perovskite oxides under different operating conditions (e.g., high temperature, applied potential), as well as their influence on the surface oxygen exchange kinetics at elevated temperatures. We contrast Sr segregation, which is associated with Sr redistribution in the crystal lattice near the surface, with Sr enrichment, which involves Sr redistribution via the formation of secondary phases. The newly developed coherent Bragg rod analysis (COBRA) and energy-modulated differential COBRA are uniquely powerful ways of providing information about surface and interfacial cation segregation at the atomic scale for these thin film electrocatalysts. In situ ambient pressure X-ray photoelectron spectroscopy (APXPS) studies under electrochemical operating conditions give additional insights into cation migration. Direct COBRA and APXPS evidence for surface Sr segregation was found for La1-xSrxCoO3-δ and (La1-ySry)2CoO4±δ/La1-xSrxCoO3-δ oxide thin films, and 10. Atomic-Scale Observations of (010) LiFePO4 Surfaces Before and After Chemical Delithiation. Science.gov (United States) Kobayashi, Shunsuke; Fisher, Craig A J; Kato, Takeharu; Ukyo, Yoshio; Hirayama, Tsukasa; Ikuhara, Yuichi 2016-09-14 The ability to directly view the surface structures of battery materials with atomic resolution promises to dramatically improve our understanding of lithium (de)intercalation and related processes. Here we report the use of state-of-the-art scanning transmission electron microscopy techniques to probe the (010) surface of the commercially important material LiFePO4 and compare the results with theoretical models. The surface structure is noticeably different depending on whether Li ions are present in the topmost surface layer or not. Li ions are also found to migrate back to surface regions from within the crystal relatively quickly after partial delithiation, demonstrating the facile nature of Li transport in the [010] direction. The results are consistent with phase transformation models involving metastable phase formation and relaxation, providing atomic-level insights into these fundamental processes. 11. The transition of a Gaussian chain end-grafted at a penetrable surface NARCIS (Netherlands) Skvortsov, A.M.; Klusken, L.I.; Male, van J.; Leermakers, F.A.M. 2000-01-01 A Gaussian chain at a liquid–liquid interface is considered. The solvents are represented by an external potential field u that has a constant value in one half-space and is zero elsewhere. One end of the chain is fixed at the boundary where the external potential field changes its value. For this 12.
Chain polymerization of diacetylene compound multilayer films on the topmost surface initiated by a scanning tunneling microscope tip. Science.gov (United States) Takajo, Daisuke; Okawa, Yuji; Hasegawa, Tsuyoshi; Aono, Masakazu 2007-05-08 Chain polymerizations of diacetylene compound multilayer films on graphite substrates were examined with a scanning tunneling microscope (STM) at the liquid/solid interface of the phenyloctane solution. The first layer grew very quickly into many small domains. This was followed by the slow formation of the piled-up layers into much larger domains. Chain polymerization on the topmost surface layer could be initiated by applying a pulsed voltage between the STM tip and the substrate, usually producing a long polymer of submicrometer length. In contrast, polymerization on the underlying layers was never observed. This can be explained by a conformation model in which the polymer backbone is lifted up. 13. Electronic and spectroscopic properties of early 3d metal atoms on a graphite surface Science.gov (United States) Rakotomahevitra, A.; Garreau, G.; Demangeat, C.; Parlebas, J. C. 1995-07-01 High-sensitivity magneto-optic Kerr effect experiments failed to detect manifestations of magnetism in epitaxial films of V on Ag(100) substrates. More recently, V 3s XPS of freshly evaporated V clusters on graphite exhibited the appearance of a satellite structure, which was then interpreted as an effect of surface magnetic moments on V. It is the absence of unambiguous results on the electronic properties of early supported 3d metals that prompts us to examine the problem. Our purpose is twofold. In the first part, after a total-energy calculation within a tight-binding method that yields the equilibrium position of a given adatom, we use the Hartree-Fock approximation to search for a possible magnetic solution of V (or Cr) on graphite for a reasonable value of the exchange integral Jdd.
In the second part, the information given by the density of states of the graphite surface, as well as the additional states of the adsorbed atom, is taken into account through a generalised impurity Anderson Hamiltonian which incorporates the various Coulomb and exchange interactions necessary to analyse the 3s XPS results. 14. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms Science.gov (United States) Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei 2016-01-01 Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851 15. Atomic Layer-Deposited TiO2 Coatings on NiTi Surface Science.gov (United States) Vokoun, D.; Racek, J.; Kadeřávek, L.; Kei, C.
C.; Yu, Y. S.; Klimša, L.; Šittner, P. 2018-02-01 NiTi shape-memory alloys may release poisonous Ni ions at the alloy surface. In an attempt to prepare a well-performing surface layer on an NiTi sample, the thermally grown TiO2 layer, which formed during the heat treatment of NiTi, was removed and replaced with a new TiO2 layer prepared using the atomic layer deposition (ALD) method. Using X-ray photoelectron spectroscopy, it was found that the ALD layer, prepared at a temperature as low as 100 °C, contained Ti in oxidation states +4 and +3. The static corrosion properties of the ALD-coated NiTi samples improved further compared to those of samples covered by thermally grown oxide: the corrosion rate of samples with thermally grown oxide was 1.05 × 10⁻⁵ mm/year, whereas the corrosion rate of the ALD-coated samples turned out to be about five times lower. However, cracking of the ALD coating occurred at about 1.5% strain during superelastic mechanical loading in tension taking place via the propagation of a localized martensite band. 16. HBr Formation from the Reaction between Gas-phase Bromine Atom and Vibrationally Excited Chemisorbed Hydrogen Atoms on a Si(001)-(2 × 1) Surface International Nuclear Information System (INIS) Ree, J.; Yoon, S. H.; Park, K. G.; Kim, Y. H. 2004-01-01 We have calculated the probability of HBr formation and the disposal of the reaction exothermicity in HBr produced from the reaction of gas-phase bromine with highly covered chemisorbed hydrogen atoms on a Si(001)-(2 × 1) surface. The reaction probability is about 0.20 at a gas temperature of 1500 K and a surface temperature of 300 K. Raising the initial vibrational state of the adsorbate(H)-surface(Si) bond from the ground state to the v = 1, 2 and 3 states causes the vibrational, translational and rotational energies of the product HBr to increase equally. However, the vibrational and translational motions of product HBr share most of the reaction energy.
Vibrational population of the HBr molecules produced from the ground state adsorbate-surface bond (vHSi = 0) follows the Boltzmann distribution, but it deviates seriously from the Boltzmann distribution when the initial vibrational energy of the adsorbate-surface bond increases. When the vibration of the adsorbate-surface bond is in the ground state, the amount of energy dissipated into the surface is negative, while it becomes positive as vHSi increases. The energy distributions among the various modes depend weakly on surface temperature in the range of 0-600 K, regardless of the initial vibrational state of the H(ad)-Si(s) bond. 17. Changes in surface morphology and microcrack initiation in polymers under simultaneous exposure to stress and fast atom bombardment International Nuclear Information System (INIS) Michael, R.S.; Frank, S.; Stulik, D.; Dickinson, J.T. 1987-01-01 The authors present studies of the changes in surface morphology due to simultaneous exposure of polymers to stress and fast atom bombardment. The polymers examined were Teflon, Kapton, Nylon, and Kevlar-49. The incident particles were 6 keV xenon atoms. The authors show that in the presence of mechanical stress these polymers show topographical changes at particle doses considerably lower than similar changes produced on unstressed material. Applied stress also promotes the formation of surface microcracks which could greatly reduce the mechanical strength of the material. 18. Nanoscale fabrication and characterization of chemically modified silicon surfaces using conductive atomic force microscopy in liquids Science.gov (United States) Kinser, Christopher Reagan This dissertation examines the modification and characterization of hydrogen-terminated silicon surfaces in organic liquids. Conductive atomic force microscope (cAFM) lithography is used to fabricate structures with sub-100 nm line width on H:Si(111) in n-alkanes, 1-alkenes, and 1-alkynes.
Nanopatterning is accomplished by applying a positive (n-alkanes and 1-alkenes) or a negative (1-alkynes) voltage pulse to the silicon substrate with the cAFM tip connected to ground. The chemical and kinetic behavior of the patterned features is characterized using AFM, lateral force microscopy, time-of-flight secondary ion mass spectrometry (TOF SIMS), and chemical etching. Features patterned in hexadecane, 1-octadecene, and undecylenic acid methyl ester exhibited chemical and kinetic behavior consistent with AFM field induced oxidation. The oxide features are formed due to capillary condensation of a water meniscus at the AFM tip-sample junction. A space-charge limited growth model is proposed to explain the observed growth kinetics. Surface modifications produced in the presence of neat 1-dodecyne and 1-octadecyne exhibited a reduced lateral force compared to the background H:Si(111) substrate and were resistant to a hydrofluoric acid etch, characteristics which indicate that the patterned features are not due to field induced oxidation and which are consistent with the presence of the methyl-terminated 1-alkyne bound directly to the silicon surface through silicon-carbon bonds. In addition to the cAFM patterned surfaces, full monolayers of undecylenic acid methyl ester (SAM-1) and undec-10-enoic acid 2-bromoethyl ester (SAM-2) were grown on H:Si(111) substrates using ultraviolet light. The structure and chemistry of the monolayers were characterized using AFM, TOF SIMS, X-ray photoelectron spectroscopy (XPS), X-ray reflectivity (XRR), X-ray standing waves (XSW), and X-ray fluorescence (XRF). These combined analyses provide evidence that SAM-1 and SAM-2 form dense monolayers. 19. Hydrogel brushes grafted from stainless steel via surface-initiated atom transfer radical polymerization for marine antifouling International Nuclear Information System (INIS) Wang, Jingjing; Wei, Jun 2016-01-01
http://math.stackexchange.com/questions/327617/cournot-nash-equilibrium-in-duopoly
# Cournot-Nash Equilibrium in Duopoly

This is a homework question, but resources online are exceedingly complicated, so I was hoping there was a fast, efficient way of solving the following question: There are 2 firms in an industry, which have the following total cost functions and inverse demand functions. \begin{align*} \text{Firm 1}: \quad &C_1 = 50 Q_1\\ &P_1 = 100 - 0.5 (Q_1 + Q_2)\\ \text{Firm 2}: \quad &C_2 = 24 Q_2\\ &P_2 = 100 - 0.5 (Q_1 + Q_2) \end{align*} What is the Cournot-Nash equilibrium for this industry?

I've tried to solve this dozens of times. My idea was to find the profit equation for both, take the derivative, set equal to zero, and then solve for $Q_1$ and $Q_2$. Doing this, I get: $$Q_1 = -5Q_2 + 500\\ Q_2 = -5Q_1 + 760$$ Plug the $Q_2$ equation into Firm 1's equation and solve, but I keep getting that $Q_1$ should equal $137.5$, which is not the correct answer.

---

There is a standard way of solving for $Q_1$ and $Q_2$:

1. Determine the profit functions.
2. Determine the best response functions for the firms.
3. Substitute $Q_1$ or $Q_2$ in the other profit function and solve.

All these steps are already mentioned, so you know what to do. Below you can search for your mistake. The profit function for firm 1 equals $\Pi_1= P_1Q_1-C_1=Q_1 \cdot (100-0.5(Q_1+Q_2)) - 50Q_1$. The profit function for firm 2 equals $\Pi_2=P_2Q_2-C_2=Q_2 \cdot (100-0.5(Q_1+Q_2)) - 24Q_2$. The best response functions can be determined by differentiating the profit function of firm 1 w.r.t. $Q_1$ and of firm 2 w.r.t. $Q_2$ and setting them equal to zero: $$\frac{\partial \Pi_1}{\partial Q_1}=100-Q_1-0.5Q_2-50=50-Q_1-0.5Q_2=0$$ $$\implies Q_1=50-0.5Q_2$$ $$\frac{\partial \Pi_2}{\partial Q_2}=100-Q_2-0.5Q_1-24=76-Q_2-0.5Q_1=0$$ Now we can make the substitution $$76-Q_2-0.5 \cdot (50-0.5Q_2)=0$$ $$\implies 51-Q_2+0.25Q_2=0 \implies 0.75Q_2=51$$ And thus we find $Q_2=68$ and can solve easily for $Q_1$: $$Q_2=68 \ \text{and} \ Q_1=50-0.5 \cdot 68=16$$

---

This is perfect. Thank you.
–  Parseltongue Mar 11 '13 at 23:44

You're welcome :) –  Bob Mar 11 '13 at 23:53

---

To solve with Mathematica, see the example in 'Simple Cournot example in Mathematica'.

Your best response functions $Q_1 = -5Q_2 + 500$, $Q_2 = -5Q_1 + 760$ are incorrect. For $P=a-b(Q_1+Q_2)$ and $C_1(Q_1)=c_1\cdot Q_1$, the profit function is $\Pi(Q_1,Q_2)=(a-bQ_2-c_1) Q_1-bQ_1^2$, so the best response is $Q_1(Q_2)=\frac{a-bQ_2-c_1}{2b}$. In your case, $a=100$, $b=0.5$, $c_1=50$, which results in $Q_1(Q_2)=\frac{100-50}{2\cdot0.5}-\frac{1}{2}Q_2$.
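The substitution in the accepted answer is easy to check numerically. A short Python sketch (illustrative, not from the original thread) solves the two linear best-response functions for general parameters:

```python
def cournot_equilibrium(a, b, c1, c2):
    """Solve the pair of best responses
       Q1 = (a - c1)/(2b) - Q2/2  and  Q2 = (a - c2)/(2b) - Q1/2
    by substituting one into the other, as in the answer above."""
    r1 = (a - c1) / (2 * b)          # intercept of firm 1's best response
    r2 = (a - c2) / (2 * b)          # intercept of firm 2's best response
    q2 = (r2 - r1 / 2) / (1 - 0.25)  # substitute Q1(Q2) into firm 2's condition
    q1 = r1 - q2 / 2
    return q1, q2

print(cournot_equilibrium(a=100, b=0.5, c1=50, c2=24))  # (16.0, 68.0)
```

With equal marginal costs the same sketch reproduces the familiar symmetric Cournot quantity $(a-c)/(3b)$.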
http://mathhelpforum.com/pre-calculus/187845-hard-factoring-problem-print.html
# Hard factoring problem • Sep 12th 2011, 12:34 PM infraRed Hard factoring problem Hi. I had this practice problem for learning factoring: $4a^2c^2-(a^2-b^2+c^2)^2$ I eventually gave up, but the answer was: $(a+b+c)(a+b-c)(a-b+c)(b-a+c)$ I'm wondering if anyone understands what steps were taken. (Which unfortunately were not described on the answer page.) I tried distributing, grouping, and factoring formulas, but the closest I got (if you want to call it progress) was: $(a^2+b^2)^2+c(3a^2c-a^2+b^2+b^2c-c^3)$ Anyway, I'm about at the point now where I can hardly see the numbers clearly... • Sep 12th 2011, 12:41 PM anonimnystefy Re: Hard factoring problem hi infraRed you can write that like this: $(2ac)^2-(a^2-b^2+c^2)^2$ try to factor this expression with difference of the squares method.
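Working the hint through: with $Y=a^2-b^2+c^2$ the difference of squares gives $(2ac+Y)(2ac-Y)$, and each of those factors is again a difference of squares, which produces the book's four linear factors. A quick Python spot-check over integer points (my own, not from the thread):

```python
from itertools import product

# With X = 2ac and Y = a^2 - b^2 + c^2:
#   4a^2c^2 - Y^2 = (2ac + Y)(2ac - Y)
#   2ac + Y = (a + c)^2 - b^2 = (a + b + c)(a - b + c)
#   2ac - Y = b^2 - (a - c)^2 = (a + b - c)(b - a + c)

def original(a, b, c):
    return 4*a*a*c*c - (a*a - b*b + c*c)**2

def factored(a, b, c):
    return (a + b + c)*(a + b - c)*(a - b + c)*(b - a + c)

# Polynomial identity check over a grid of integer points.
assert all(original(a, b, c) == factored(a, b, c)
           for a, b, c in product(range(-5, 6), repeat=3))
```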
http://heidyk.com/blog/?p=70
# CPS vs. ANF

Posted by Heidy

So it seems as if I'm doing a bit better; I'm pretty glad that this cold is rather minor. This is possibly the last post that I'll be writing until I return to Tallahassee. Seeing that I'll be landing in Jacksonville first, I have to rest, repack all my things, and then drive to Tallahassee before settling down and writing any future posts.

I finished reading the "Compiling with Continuations, Continued" paper, which introduces CPS languages that utilize "contification transformation" and demonstrates an efficient graph-based representation of CPS with linear-time shrinking reductions. The languages also make a syntactic distinction between the heap-allocated functions and local continuations, which can be compiled as jumps. I won't be discussing the details of the CPS languages, but the addressed advantages of CPS (continuation-passing style) over ANF (administrative normal form). If you'd like to know more about the introduced CPS languages, I highly suggest reading this paper as it is somewhat of an easy read.

The author of this paper, Kennedy, puts forward an argument that ANF is not in fact beneficially equivalent to CPS. As I mentioned in my previous blog post, Sabry and Felleisen claimed that "${\beta\eta}$ + A-reductions on a call-by-value language is equivalent to ${\beta \eta}$ on a CPS'ed call-by-name language", but Kennedy argues that the normal form is not preserved. When utilizing transformations such as function inlining, a more complex form of ${\beta}$-reduction is required to re-normalize the let constructs in ANF in order to produce the proper normal form. Page 2 of this paper provides an example of this. Instead of using substitution to traverse the CPS term and replace it with the continuation, ANF and monadic languages utilize what they define as commuting conversions such as let y ${\Leftarrow}$ (let x ${\Leftarrow}$ M in N) in P ${\rightarrow}$ let x ${\Leftarrow}$ M in (let y ${\Leftarrow}$ N in P).
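To make the contrast concrete, here is the same small computation written three ways in Python: in direct style, in an ANF-like style where every intermediate result is let-bound to a name, and in a CPS-like style where each step passes its result to an explicit continuation. This is an analogy for illustration, not the paper's calculus:

```python
# Direct style: the intermediate sum is anonymous.
def direct(a, b, c):
    return (a + b) * c

# ANF-like: each intermediate computation gets its own name.
def anf(a, b, c):
    x = a + b   # let x <= a + b
    y = x * c   # let y <= x * c
    return y

# CPS-like: every step hands its result to a continuation k,
# so each control point is named explicitly.
def add_k(a, b, k):
    return k(a + b)

def mul_k(x, c, k):
    return k(x * c)

def cps(a, b, c, k):
    return add_k(a, b, lambda x: mul_k(x, c, k))

assert direct(1, 2, 3) == anf(1, 2, 3) == cps(1, 2, 3, lambda r: r) == 9
```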
Kennedy points out that the necessity of commuting conversions increases the complexity to $O({n^2})$, while applying reductions on CPS terms produces a complexity of $O({n})$. Any pragmatic programming language includes conditional expressions, which also require commuting conversions or a "join-point", because in ANF a conditional expression is not reduced to normal form due to a let-bound conditional expression. Thus, the true correspondence of ANF and monadic languages to CPS reductions comes at a cost of an increased complexity.

Although the drawbacks of ANF clearly demonstrate issues pertaining to complexity, CPS has efficiency issues of its own. CPS introduces administrative redexes, while ANF has successfully avoided them. "The cost of allocating closures for lambdas introduced by the CPS transformation" is another complexity concern introduced by CPS. Despite this, Kennedy argues that "the complexity of CPS terms is really a benefit, assigning useful names to all intermediate computations and control points," and I must concur. Seeing that both ANF and CPS have separate pros and cons pertaining to complexity, the explicit control flow demonstrated by CPS terms definitely has syntactic advantages over ANF. I believe that it's merely a personal choice at this point, but I find the simplified ${\beta}$-reductions of CPS to be much more elegant than the direct style of ANF.

P.S. I installed the LaTeX plugin, so now I can tediously use symbols whenever I feel fit!

## 3 comments on "CPS vs. ANF"

1. Glad to see I'm not the only one who really enjoyed this paper! I think the past efficiency concerns of CPS transformations which Kennedy referenced do not apply to his use of second-class continuations, since they are just lexically-scoped basic blocks.
   1. Heidy says: Ah, I never considered his second-class continuations. That indeed is a good point. I suppose I'm mainly referring to the attacks on CPS by Flanagan and Sabry.
2.
Heck yeah, this is exactly what I needed.
https://leetcode.com/articles/most-profit-assigning-work/
#### Approach #1: Sorting Events [Accepted]

**Intuition**

We can consider the workers in any order, so let's process them in order of skill. If we processed all jobs with lower skill first, then the profit is just the most profitable job we have seen so far.

**Algorithm**

We can use a "two pointer" approach to process jobs in order. We will keep track of best, the maximum profit seen. For each worker with a certain skill, after processing all jobs with lower or equal difficulty, we add best to our answer.

**Complexity Analysis**

• Time Complexity: $O(N \log N + Q \log Q)$, where $N$ is the number of jobs, and $Q$ is the number of people.

• Space Complexity: $O(N)$, the additional space used by jobs.

Analysis written by: @awice.
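A Python sketch of the two-pointer algorithm described above; this is my own rendering of the approach for the underlying problem ("Most Profit Assigning Work"), not the article's reference implementation:

```python
def max_profit_assignment(difficulty, profit, worker):
    jobs = sorted(zip(difficulty, profit))  # sort jobs by difficulty
    total = best = i = 0
    for skill in sorted(worker):            # process workers in order of skill
        while i < len(jobs) and jobs[i][0] <= skill:
            best = max(best, jobs[i][1])    # best profit among jobs seen so far
            i += 1
        total += best                       # this worker earns the best feasible profit
    return total

print(max_profit_assignment([2, 4, 6, 8, 10], [10, 20, 30, 40, 50], [4, 5, 6, 7]))  # 100
```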
https://timganmath.edu.sg/a-level-free-h2-math-notes-and-resources/sigma-notation/
# Sigma Notation

Sigma Notation is a practice to express the sum of a lengthy series in a simple and concise way. The notation used for this is $\Sigma$ and it is always followed by the variable we are summing over. This symbol tells us to add up everything that follows it. The numbers we add together are called the terms of the series.

##### 2017 EJC Promo Q7 (b) [Modified]

Given that $\sum\limits_{r=2}^{n}{\frac{1}{{{r}^{2}}-1}}=\frac{3}{4}-\frac{1}{2n}-\frac{1}{2\left( n+1 \right)}$,

(ii) state $\sum\limits_{r=2}^{\infty }{\frac{1}{{{r}^{2}}-1}}$,

(iii) find $\sum\limits_{r=2}^{n-1}{\frac{1}{r(r+2)}}$.

##### 2018 YJC Promo Q13

The terms of a geometric progression ${{u}_{1}},{{u}_{2}},{{u}_{3}},{{u}_{4}},…$ are such that the sum to infinity is $81$ and the sum of the first $4$ terms is $80$. If ${{u}_{1}}>100$ and $n\ge 3$,

(i) Show that $\frac{3}{r}-\frac{6}{r+1}+\frac{3}{r+2}=\frac{6}{r\left( r+1 \right)\left( r+2 \right)}$. [1]

(ii) Hence show that $\sum\limits_{r=1}^{N}{\frac{1}{r\left( r+1 \right)\left( r+2 \right)}}=\frac{1}{4}-\frac{1}{2\left( N+1 \right)}+\frac{1}{2\left( N+2 \right)}$. [3]

(iii) Give a reason why the series $\sum\limits_{r=1}^{\infty }{\frac{1}{r\left( r+1 \right)\left( r+2 \right)}}$ converges and write down its value. [2]

(iv) Use your answer to (ii) to find $\sum\limits_{r=3}^{N}{\frac{1}{r\left( r-1 \right)\left( r-2 \right)}}$ in terms of $N$. [2]

##### 2019 DHS Promo Q2

Using the result $\sum\limits_{r=1}^{n}{\frac{r}{{{2}^{r}}}}=2-\frac{n+2}{{{2}^{n}}}$, show that $\sum\limits_{r=1}^{n}{\left( r-n \right)\left( {{2}^{-r}}+1 \right)}$ can be expressed in the form $C\left( 1-\frac{1}{{{2}^{n}}} \right)+Dn\left( n+1 \right)$, where $C$ and $D$ are constants to be determined. [4]

##### 2017 VJC P1 Q8

It is given that $\sum\limits_{r=1}^{n}{\frac{{{r}^{2}}}{{{3}^{r}}}}=\frac{3}{2}-\frac{{{n}^{2}}+3n+3}{2\left( {{3}^{n}} \right)}$.

(i) Find $\sum\limits_{r=1}^{\infty }{\frac{{{r}^{2}}+{{\left( -1 \right)}^{r}}}{{{3}^{r}}}}$. [3]

(ii) Show that $\sum\limits_{r=4}^{n}{\frac{{{\left( r-2 \right)}^{2}}}{{{3}^{r-2}}}}=\frac{p}{q}-\frac{a{{n}^{2}}-an+a}{2\left( {{3}^{n-2}} \right)}$, where $a$, $p$ and $q$ are integers to be determined. [5]

##### 2020 EJC P1 Q10

The diagram shows the graph of $y=\frac{1}{{{x}^{2}}+1}$ when $x>0$.

(i) Evaluate $\int_{k}^{k+1}{\frac{1}{{{x}^{2}}+1}\text{d}x}$ for $k>0$, leaving your answer in terms of $k$. [2]

(ii) By considering appropriate rectangles on the interval $\left[ k,k+1 \right]$ for the curve $y=\frac{1}{{{x}^{2}}+1}$, show that $\frac{1}{{{\left( k+1 \right)}^{2}}+1}<{{\tan }^{-1}}\left( k+1 \right)-{{\tan }^{-1}}k<\frac{1}{{{k}^{2}}+1}$ for $k\in {{\mathbb{Z}}^{+}}$. [2]

(iii) Use the identity $\tan \left( A-B \right)=\frac{\tan A-\tan B}{1+\tan A\tan B}$ to show that ${{\tan }^{-1}}x-{{\tan }^{-1}}y={{\tan }^{-1}}\frac{x-y}{1+xy}$, where $x>y>0$. [2]

(iv) By considering parts (ii) and (iii), prove by the method of differences that $\sum\limits_{k=1}^{n}{\frac{1}{{{\left( k+1 \right)}^{2}}+1}}<{{\tan }^{-1}}\left( \frac{n}{n+2} \right)<\sum\limits_{k=1}^{n}{\frac{1}{{{k}^{2}}+1}}$. [4]
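A closed form like the one given in the first question can be sanity-checked numerically before it is used. A short Python check with exact rational arithmetic (our own illustration, not part of the question set) verifies the 2017 EJC identity and shows why the sum to infinity is $\frac{3}{4}$:

```python
from fractions import Fraction

def partial_sum(n):
    # left-hand side: sum of 1/(r^2 - 1) for r = 2..n
    return sum(Fraction(1, r*r - 1) for r in range(2, n + 1))

def closed_form(n):
    # right-hand side given in the question
    return Fraction(3, 4) - Fraction(1, 2*n) - Fraction(1, 2*(n + 1))

assert all(partial_sum(n) == closed_form(n) for n in range(2, 100))
# As n grows, the last two terms vanish, so the sum to infinity is 3/4.
```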
https://www.numerade.com/questions/reactants-d-and-e-form-product-f-draw-a-road-map-and-write-a-plan-to-find-the-mass-g-of-f-when-27-g-/
University of Maine
Problem 67

# Reactants D and E form product F. Draw a road map and write a Plan to find the mass (g) of F when 27 g of D reacts with 31 g of E.

## Video Transcript

In a chemical reaction, when you're given quantities of reactants, you can find the amount of product using stoichiometrically equivalent ratios. It's helpful to draw a visual diagram or road map to identify the different conversions needed along the way. In a problem where you're given quantities of both reactants (for example, in this reaction where D plus E reacts to form F, you're given 27 g of D and 31 g of E), one of the reactants is what's called the limiting reagent, the one that determines the amount of product formed. The other is in excess, which means that some will remain after reacting. So to figure out which one is the limiting reagent, we need to use both to calculate how much product would form. We start off with the grams of each substance and change each one of those to moles, by dividing by the molar mass. Once we have our quantities in moles, we can find out how much product will form, or how many moles of F. This information is found using the mole ratio from the balanced equation. Of these two, one will be smaller, so one will produce less product. Using the smaller amount, we can change it to grams, and we do that by multiplying by the molar mass of F.
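The road map in the transcript (grams to moles, moles of reactant to moles of F, then moles of F to grams) can be sketched in Python. Since D, E and F are abstract, the molar masses and the 1:1:1 mole ratios below are made-up placeholders rather than data from the problem:

```python
# Illustrative sketch only: D, E and F are abstract, so these molar masses
# and the 1:1:1 stoichiometry are hypothetical placeholder values.

def moles_of_product(grams, molar_mass, mole_ratio):
    moles_reactant = grams / molar_mass    # g -> mol of reactant
    return moles_reactant * mole_ratio     # mol reactant -> mol F

M_D, M_E, M_F = 30.0, 40.0, 70.0           # hypothetical molar masses (g/mol)

from_D = moles_of_product(27, M_D, 1)      # 0.9 mol F if D limits
from_E = moles_of_product(31, M_E, 1)      # 0.775 mol F if E limits

# The limiting reagent is whichever produces fewer moles of F.
mass_F = min(from_D, from_E) * M_F         # mol F -> g F
print(round(mass_F, 2))  # 54.25
```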
http://www.bioinformaticszen.com/post/makefiles/
# Bioinformatics Zen

## Makefiles // Tue July 10 2012

This is the second post in a series discussing creating simple, reproducible and organised research workflows. In this post I will describe creating reproducible workflows using GNU Make to manage the dependencies between project steps.

A bioinformatics research project should be easily reproducible. Anyone should be able to repeat all the steps from scratch. Furthermore, the simpler it is to repeat all the project's analyses from the beginning the better. Reproducibility should also include showing each intermediate step and the processes used to generate the results. Making a project reproducible makes it simpler for you to return to a project after several months and repeat your work. Reproducible research also allows you to share your work with others who would like to build upon it.

Build files can be used to manage dependencies between steps in a workflow. Using a build file in your project therefore goes a long way towards facilitating reproducible research. GNU Make provides a syntax to describe project dependency steps as targets in a computational workflow. Each target represents a file that should be created from a described set of commands. An example Makefile looks like this:

```
two.txt: one.txt
	# Command line steps to create 'two.txt' using 'one.txt'

one.txt:
	# Command line steps to create 'one.txt'
	# Multiple lines of shell commands can be used.
```

Each target is described on the unindented lines. The name of the created file comes before the colon ':' and the dependencies needed to create this file come after. The shell commands used to generate the target file are on the indented lines following the target. The Makefile example above defines two tasks, each generating a file. In this example these are the files one.txt and two.txt. In this workflow the generation of the file two.txt is dependent on the generation of one.txt.
Calling make at the command line will evaluate the first target at the top of the file and then evaluate any dependencies required for this target. The anatomy of a Makefile target is therefore a line containing the name of the file to create followed by the dependencies required to do this. Shell commands to generate the target file are then on the subsequent indented lines. Workflows are created by chaining these targets together: the output file from one target becomes the dependency for another. I have started using GNU Make in my computational projects because this syntax allows for very light-weight specification of a workflow. The dependency system ensures that each target file is regenerated if any earlier steps change. This therefore makes computational projects very reproducible. I have briefly described how GNU Make works, and in the sections below I will describe the advantages of using GNU Make.

## Build workflows independent of any language

When I wrote my original post on 'organised bioinformatics experiments' I programmed mainly in Ruby. Therefore using the Ruby-based Rake as a build tool made sense as I could write in the programming language I was most familiar with. In the last two years I've been experimenting with other languages such as Erlang and Clojure. I feel that in future I will continue to try new languages and include them in my research. Therefore the advantage of using Make over Rake is that every workflow step is simply a call to the shell rather than being tied to a specific programming language. This allows me to write and call scripts written in any language. In my current process I package up my analysis code into scripts and call these from the Makefile. Separating each analysis step into a single file makes it much easier to switch out one language for another as long as they both return the same output format. Often I end up replacing scripts with simple shell-based combinations of pipes and coreutils.
Using in-built shell commands can often be simpler and faster than writing your own script. Finally, separating the analysis steps into individual files enables an important aspect of creating more reproducible computational workflows, which I'll outline in the next section.

## Generative code as a target dependency

Any file can be a dependency to a Make target, including the script doing the data processing. For example, consider the example Makefile target:

```
output.txt: bin/analyse input.txt
	$^ > $@
```

The target is a file called output.txt which requires two dependencies: bin/analyse and input.txt. The target code takes advantage of Makefile macros, which simplify writing targets. In this case $^ expands to the list of dependencies and $@ expands to the target file. This is therefore similar to writing:

```
output.txt: bin/analyse input.txt
	bin/analyse input.txt > output.txt
```

The advantage of making bin/analyse a target dependency is that whenever this file is edited, this step and any downstream steps will be regenerated when I call make again. This is because bin/analyse will have a later time stamp than the target file depending on it. This clarifies a critical part of making a workflow reproducible: it is not only when input data changes that a workflow needs to be rerun, but also when any scripts that process or generate data change. Declaring the data processing scripts as dependencies enforces this requirement.

## Abstraction of analysis steps

Consider this example Makefile with three targets:

```
all: prot001.txt prot002.txt prot003.txt

%.txt: ./bin/intensive_operation %.fasta
	$^ > $@

%.fasta:
	curl database.example.com/$* > $@
```

Here the target all depends on the files prot00[1-3].txt. This target however defines no code to create them. Furthermore, in the following lines there are no tasks to generate any of these files individually; instead there are tasks to generate files based on two file type extensions: .txt or .fasta.
When the Makefile is evaluated these wildcard targets are expanded to create targets for files that match the extensions. For instance, the following steps would be executed to generate prot001.txt:

```make
prot001.txt: ./bin/intensive_operation prot001.fasta
	./bin/intensive_operation prot001.fasta > prot001.txt

prot001.fasta:
	curl database.example.com/prot001 > prot001.fasta
```

This wildcard % target specification allows the process to be abstracted out independently of individual files. Generalised targets are instead created for files based on their file extension. This makes it much simpler to change the data analysed whilst still maintaining the same workflow. Concretely, this means you can specify additional files to be generated in the all task without changing the rest of the workflow. Idiomatically, I would use a variable named objects to specify the output files I am interested in:

```make
objects = prot001.txt prot002.txt prot003.txt

all: $(objects)

%.txt: ./bin/intensive_operation %.fasta
	$^ > $@

%.fasta:
	curl http://database.example.com/$* > $@
```

You could go even further than this, however, and move the list of target files to a separate file and read their names in as a dependency. This would mean you would only change the Makefile when making changes to your workflow, thereby isolating changes to the workflow process and changes to the workflow inputs in different files in the revision control history.

## Very simple parallelisation

Bigger data sets require more time to process, and taking advantage of multi-core processors is a cheap way to reduce this. Given the Makefile I outlined in the previous section, I can invoke make as follows:

```shell
make -j 4
```

As I defined my targets above by abstracting out the process based on file extension, each target step can be run independently. Make can therefore use a separate process to generate each of the required files.
If the number of files to be generated exceeds the number of processes specified by the -j flag, the remaining targets will begin as previous targets finish. This is a very cheap method to add multi-core parallelisation to a workflow, as long as you can abstract out the workflow processes to fit this pattern.

## Summary

GNU Make provides a very elegant functional language, where targets can be thought of as functions mapping the data from one input file into an output file. The Make syntax is nevertheless still very simple, and adding or updating tasks is straightforward. Furthermore, it is simple to include your own scripts as dependencies in a workflow, or to drop down to fast and powerful combinations of coreutils commands.
http://mathhelpforum.com/calculus/173180-question-about-solid-revolution-disc-method-print.html
# Question about Solid of Revolution (Disc Method)

• March 2nd 2011, 07:49 AM
Lemonie
Question about Solid of Revolution (Disc Method)

So, I know the formula for the disc method is: $V=\pi\int_{a}^{b}[R(x)]^2\,dx$ ...what I'm so confused about is how to set up $R(x)$ when the axis of revolution isn't the x-axis or the y-axis. For example: http://i52.tinypic.com/nqhp5.jpg

What is the equation to be used when using the disc method to get:
a.) the shaded region when revolved about y=8
b.) the non-shaded region when revolved about y=8
Thank you in advance!

• March 2nd 2011, 08:09 AM
HallsofIvy

A disk will have radius extending from the graph to the line of rotation. If the line of rotation is y= 8, then the radius is R(x)= |y- 8|= |f(x)- 8|. For (b), the non-shaded area, since it goes up to the line y= 8, the volume would be $\pi\int_{0}^{x_1} (f(x)- 8)^2\, dx$ where the upper limit $x_1$ is the value of x where the graph of y= f(x) crosses the line y= 8. That is, $f(x_1)= 8$.

For (a), where there is unshaded area between the shaded area and the line of rotation, you cannot use the "disk method" directly. What you can do is use the "washer" method. A washer has an outer radius of $r_2$ and an inner radius of $r_1$, and so its area is the area between those two radii, $\pi r_2^2- \pi r_1^2= \pi(r_2^2- r_1^2)$. That is really the same as doing two separate "disk" calculations, to find an "outer volume" and an "inner volume" and subtracting: $\pi\int_a^b R_2(x)^2\,dx- \pi\int_a^b R_1(x)^2\,dx$. For a problem like the one you show, where one edge is just y= 0, that outer volume is just the volume of a cylinder, $\pi R^2 h$, which here would be $64\pi h$, and, again, h is the value of x where the graph crosses the line of rotation, f(x)= 8.
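Since the thread leaves f(x) unspecified, the disk formula about y = 8 can be sanity-checked numerically with a hypothetical curve. The sketch below assumes f(x) = x², which crosses y = 8 at x₁ = 2√2; the helper function and this choice of curve are my own illustration, not from the thread.

```python
import math

def disc_volume(f, axis_y, x0, x1, n=100_000):
    """Approximate V = pi * integral of (f(x) - axis_y)^2 dx
    over [x0, x1] using the composite trapezoid rule."""
    h = (x1 - x0) / n
    total = 0.5 * ((f(x0) - axis_y) ** 2 + (f(x1) - axis_y) ** 2)
    for i in range(1, n):
        total += (f(x0 + i * h) - axis_y) ** 2
    return math.pi * h * total

# Hypothetical curve f(x) = x^2 rotated about y = 8; the region between
# the curve and the axis runs from x = 0 to x1 where f(x1) = 8.
f = lambda x: x * x
x1 = math.sqrt(8)

v = disc_volume(f, 8, 0.0, x1)
# Closed form: pi * integral of (8 - x^2)^2 dx from 0 to 2*sqrt(2)
#            = 1024*sqrt(2)*pi/15
exact = 1024 * math.sqrt(2) * math.pi / 15
print(v, exact)
```

The numerical and closed-form values agree to several decimal places, which is a quick way to confirm that the radius really is |f(x) − 8| in this setup.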
https://stacks.math.columbia.edu/tag/0978
Lemma 61.5.6. Let $A$ be a ring such that $\mathop{\mathrm{Spec}}(A)$ is profinite. Let $A \to B$ be a ring map. Then $\mathop{\mathrm{Spec}}(B)$ is profinite in each of the following cases:

1. if $\mathfrak q,\mathfrak q' \subset B$ lie over the same prime of $A$, then neither $\mathfrak q \subset \mathfrak q'$, nor $\mathfrak q' \subset \mathfrak q$,
2. $A \to B$ induces algebraic extensions of residue fields,
3. $A \to B$ is a local isomorphism,
4. $A \to B$ identifies local rings,
5. $A \to B$ is weakly étale,
6. $A \to B$ is quasi-finite,
7. $A \to B$ is unramified,
8. $A \to B$ is étale,
9. $B$ is a filtered colimit of $A$-algebras as in (1) – (8),
10. etc.

Proof. By the references mentioned above (Algebra, Lemma 10.26.5 or Topology, Lemma 5.23.8) there are no specializations between distinct points of $\mathop{\mathrm{Spec}}(A)$, and $\mathop{\mathrm{Spec}}(B)$ is profinite if and only if there are no specializations between distinct points of $\mathop{\mathrm{Spec}}(B)$. These specializations can only happen in the fibres of $\mathop{\mathrm{Spec}}(B) \to \mathop{\mathrm{Spec}}(A)$. In this way we see that (1) is true.

The assumption in (2) implies all primes of $B$ are maximal by Algebra, Lemma 10.35.9. Thus (2) holds. If $A \to B$ is a local isomorphism or identifies local rings, then the residue field extensions are trivial, so (3) and (4) follow from (2). If $A \to B$ is weakly étale, then More on Algebra, Lemma 15.104.17 tells us it induces separable algebraic residue field extensions, so (5) follows from (2). If $A \to B$ is quasi-finite, then the fibres are finite discrete topological spaces, hence (6) follows from (1). Cases (7) and (8) follow from this as unramified and étale ring maps are quasi-finite (Algebra, Lemmas 10.151.6 and 10.143.6).
If $B = \mathop{\mathrm{colim}}\nolimits B_ i$ is a filtered colimit of $A$-algebras, then $\mathop{\mathrm{Spec}}(B) = \mathop{\mathrm{lim}}\nolimits \mathop{\mathrm{Spec}}(B_ i)$ in the category of topological spaces by Limits, Lemma 32.4.2. Hence if each $\mathop{\mathrm{Spec}}(B_ i)$ is profinite, so is $\mathop{\mathrm{Spec}}(B)$ by Topology, Lemma 5.22.3. This proves (9). $\square$
https://cn.maplesoft.com/support/help/view.aspx?path=GraphTheory%2FGeometricGraphs%2FGabrielGraph
GabrielGraph - Maple Help

GraphTheory[GeometricGraphs]

GabrielGraph
construct Gabriel graph

Calling Sequence
GabrielGraph( P, opts )

Parameters
P - Matrix or list of lists representing a set of points
opts - (optional) one or more options as specified below

Options
• triangulation : list of three-element lists
Supply a previously computed Delaunay triangulation of P. The input must be a valid Delaunay triangulation in the format returned by ComputationalGeometry[DelaunayTriangulation]: a list of three-element lists of integers, representing triangles in a triangulation of P.
• vertices : list of integers, strings or symbols
Specifies the vertices to be used in the generated graph.
• weighted : true or false
If weighted=true, the result is a weighted graph whose edge weights correspond to the Euclidean distance between points. The default is false.

Description
• The GabrielGraph(P, opts) command returns the Gabriel graph for the point set P.
• The parameter P must be a Matrix or list of lists representing a set of points.

Definitions
• Let $P$ be a set of points in $n$ dimensions, let $p$ and $q$ be arbitrary points from $P$, and let $\mathrm{dist}\left(p,q\right)$ be the Euclidean distance between $p$ and $q$.
• The Gabriel graph is the graph whose vertices correspond to points in $P$ and whose edges consist of those pairs $p$ and $q$ from $P$ for which the closed ball centered halfway between $p$ and $q$, with diameter equal to $\mathrm{dist}\left(p,q\right)$, contains no other points from $P$. Formally, define the ball $B\left(p,q\right)$ to be those points $r\in P\setminus \left\{p,q\right\}$ such that $\mathrm{dist}\left(r,\frac{p}{2}+\frac{q}{2}\right)\le \frac{\mathrm{dist}\left(p,q\right)}{2}$. The Gabriel graph has an edge between $p$ and $q$ if and only if $B\left(p,q\right)=\varnothing$.
• The Gabriel graph has the following relationships with other graphs:
The Euclidean minimum spanning tree on P is a subgraph of the Gabriel graph on P.
The nearest neighbor graph on P is a subgraph of the Gabriel graph on P. The relative neighborhood graph on P is a subgraph of the Gabriel graph on P. The Urquhart graph on P is a subgraph of the Gabriel graph on P. The Gabriel graph on P is a subgraph of the Delaunay graph on P. Examples Generate a set of random two-dimensional points and draw a Gabriel graph. > $\mathrm{with}\left(\mathrm{GraphTheory}\right):$ > $\mathrm{with}\left(\mathrm{GeometricGraphs}\right):$ > $\mathrm{points}≔\mathrm{LinearAlgebra}:-\mathrm{RandomMatrix}\left(60,2,\mathrm{generator}=0..100.,\mathrm{datatype}=\mathrm{float}\left[8\right]\right)$ ${\mathrm{points}}{≔}\begin{array}{c}\left[\begin{array}{cc}{9.85017697341803}& {82.9750304386195}\\ {86.0670183749663}& {83.3188659363996}\\ {64.3746795546741}& {73.8671607639673}\\ {57.3670557294666}& {2.34399775883031}\\ {23.6234264844933}& {52.6873367387328}\\ {47.0027547350003}& {22.2459488367552}\\ {74.9213491558963}& {62.0471820220718}\\ {92.1513434709073}& {96.3107262637080}\\ {48.2319624355944}& {63.7563267144141}\\ {90.9441877431805}& {33.8527464913022}\\ {⋮}& {⋮}\end{array}\right]\\ \hfill {\text{60 × 2 Matrix}}\end{array}$ (1) > $\mathrm{GG}≔\mathrm{GabrielGraph}\left(\mathrm{points}\right)$ ${\mathrm{GG}}{≔}{\mathrm{Graph 1: an undirected unweighted graph with 60 vertices and 105 edge\left(s\right)}}$ (2) > $\mathrm{DrawGraph}\left(\mathrm{GG}\right)$ References Gabriel, Kuno Ruben; Sokal, Robert Reuven (1969), "A new statistical approach to geographic variation analysis", Systematic Biology, Society of Systematic Biologists, 18(3): 259–278. doi:10.2307/2412323. Compatibility • The GraphTheory[GeometricGraphs][GabrielGraph] command was introduced in Maple 2020. • For more information on Maple 2020 changes, see Updates in Maple 2020.
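The empty-ball definition above can also be checked directly outside Maple with a brute-force O(n³) sketch. The helper below is my own illustration of the definition, not Maple's implementation (which works from a Delaunay triangulation): it keeps an edge (p, q) exactly when no third point lies in the closed ball whose diameter is the segment pq.

```python
import math

def gabriel_edges(points):
    """Brute-force Gabriel graph: edge (i, j) survives when no other point
    lies in the closed ball centred at the midpoint of p and q with radius
    dist(p, q) / 2."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    edges = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            p, q = points[i], points[j]
            mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
            r = dist(p, q) / 2
            if all(dist(points[k], mid) > r
                   for k in range(n) if k not in (i, j)):
                edges.append((i, j))
    return edges

# The point (0.5, 0.1) sits inside the ball on the segment (0,0)-(1,0),
# so that pair is not a Gabriel edge.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.1)]
print(gabriel_edges(pts))  # → [(0, 2), (1, 2)]
```

This brute-force check is only practical for small point sets, but it makes the subgraph relationships above easy to verify experimentally.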
http://mathhelpforum.com/math-topics/194897-road-allignment.html
# Math Help - Road Alignment

Hi guys,

Having a bit of trouble with the below, possibly with the wording rather than the mathematical end. But anyway, I would appreciate it if anyone could work through it.

A single carriageway road changes direction by 25º by means of a circular curve without transitions. The distance from the intersection point to the start of the circular curve is 170 m, and the superelevation on the circular curve is 2.5%. Calculate the value of μ for a vehicle travelling at 80 km/h at the middle of the curve.

Basically what needs to be found here is R, the radius, as it then goes into the formula $v^2/(127R) = e + \mu$. Obviously it's just attaining R that's tricky.

Originally Posted by question
[the question above]

1. Draw a sketch! (see attachment)
2. You are dealing with 2 different right triangles. Use the 1st one to determine the necessary interior angles. (The midpoint of the curve is located on the angle bisector of the angle of 155°)
3.
Use the indicated grey right triangle to calculate the length of R: $\frac{85\ m}{R}=\sin(12.5^\circ)$. Solve for R.
Attached Thumbnails

Originally Posted by earboth
1. Draw a sketch! (see attachment)
2. You are dealing with 2 different right triangles. Use the 1st one to determine the necessary interior angles. (The midpoint of the curve is located on the angle bisector of the angle of 155°)
3. Use the indicated grey right triangle to calculate the length of R: $\frac{85\ m}{R}=\sin(12.5^\circ)$. Solve for R.

Was just about to respond regarding the 72.5 and 77.5. I had gotten to that point myself with the diagram, and I'm sure you noticed solving for R gives a negative value of -1281.6 m, which is a bit dubious. The answer that should be obtained is 767 m. Interpretation of the intersection point is crucial, and I tried the 170 m length as the distance from the start of the curve to where the line changes direction. This did not provide the correct answer either. I think the wording of the question is deliberately troublesome. Really appreciate you taking the time to reply. Thanks

Originally Posted by question
Was just about to respond regarding the 72.5 and 77.5. I had gotten to that point myself with the diagram, and I'm sure you noticed solving for R gives a negative value of -1281.6 m, which is a bit dubious. The answer that should be obtained is 767 m. <--- Your calculator is still in radians mode. You have to switch it into degree mode. Then the result is 392.7 m.
Interpretation of the intersection point is crucial, and I tried the 170 m length as the distance from the start of the curve to where the line changes direction. This did not provide the correct answer either. I think the wording of the question is deliberately troublesome. Really appreciate you taking the time to reply. Thanks

I obviously misunderstood the question. If you use the grey right triangle you know that $\frac{R}{170\ m} = \tan(77.5^\circ)$. Solve for R. You'll get $R \approx 766.8\ m$.
Attached Thumbnails
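Putting the final tangent relation together with the design formula quoted in the question, the numbers can be checked in a few lines. This is only a sketch of the arithmetic; the variable names are my own.

```python
import math

deflection = 25.0        # degrees, change in road direction
tangent_length = 170.0   # metres, intersection point to curve start
e = 0.025                # superelevation (2.5%)
v = 80.0                 # km/h

# The tangent length and radius meet at half the deflection angle,
# so R / 170 = tan(90 - 25/2) in degrees.
R = tangent_length * math.tan(math.radians(90 - deflection / 2))

# Rearranging v^2 / (127 R) = e + mu for mu:
mu = v**2 / (127 * R) - e

print(round(R, 1), round(mu, 4))  # → 766.8 0.0407
```

This reproduces the thread's R ≈ 766.8 m and then gives μ ≈ 0.041 at the middle of the curve.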
http://mathhelpforum.com/pre-calculus/162274-nth-root-complex-number.html
# Math Help - nth root of a complex number

1. ## nth root of a complex number

Ok so basically I'm supposed to understand complex numbers (so that we can use them for eigenvalues and eigenvectors), but then someone dropped the ball and we got almost no teaching on the subject, and now pretty much everyone in my class who hasn't worked with complex numbers before (which includes me) is having trouble understanding. Ok, so I'm doing an example question and I just can't seem to get it to work: $z^4 = -64$ and I have to find z. Immediately my response was to find the fourth root of 64 ($2\sqrt{2}$) and multiply it by some form of $i$ whose fourth power would be equal to negative 1. This turned out to be $\sqrt{i}$, and I'm sure that there are various terms that could be made positive or negative (or some duplicate terms) yielding four roots. Great. I still don't understand it; that was essentially guess and check.

So my teacher did show us a method for finding the nth root of a complex number (I'll cut out most of the algebraic steps as I'm sure most of the people who can help me will be well aware of this method): if you have $z^n = w$, where w is a complex number, you convert w into polar form and then set $z = te^{i\phi}$ so that you get $(te^{i\phi})^n = re^{i\theta}$. After some algebraic manipulation, you get $z = \sqrt[n]{r}\, e^{i(\theta + 2m\pi)/n}$ where m is an integer.

So I tried that with $z^4 = -64 = -64 + 0i$. To convert, for the modulus I got $r = \sqrt{(-64)^2 + 0^2} = 64$ and for the argument I got $\theta = \arctan(\frac{0}{-64}) = 0$. So... $z = \sqrt[4]{64}\, e^{i(0 + 2m\pi)/4} = 2\sqrt{2}\, e^{i m\pi/2}$, and when you change that back into algebraic form (the form the answer is required in) you get (for m=0) $z = 2\sqrt{2}(\cos 0 + i\sin 0) = 2\sqrt{2}$, which is clearly not right. What am I doing wrong?
The problem I see is that when I convert a pure real number into a complex number in polar form, the polar form is the same regardless of whether the number is positive or negative. Any help with solving problems of this type is greatly appreciated.

2. $- 64 = 64\exp \left( {i\pi } \right)$. Let $\sigma = \sqrt[n]{{64}}\exp \left( {\frac{{i\pi }}{n}} \right)\;\& \,\xi = \exp \left( {\frac{{2i\pi }}{n}} \right)$. The roots are $\sigma\cdot \xi^k,~k=0,1,\cdots,n-1$.

3. How did you get pi as the argument? We were taught that if you have a complex number z, then $z = x + yi = r \exp(i\theta)$ where $r = \sqrt{x^2 + y^2}$ and $\theta = \arctan \frac{y}{x}$. So in this case wouldn't $\theta = \arctan \frac{0}{-64} = \arctan 0 = 0$?

4. For any negative real number, $\pi$ is the argument.

5. Interesting. Why exactly is this? Does it have something to do with the quadrant that $\arctan \frac{0}{x}$ ends up being in if x is negative?

6. Originally Posted by TheGreenLaser
Interesting. Why exactly is this? Does it have something to do with the quadrant that $\arctan \frac{0}{x}$ ends up being in if x is negative?
Here is the rule for arguments. $Arg(z) = \left\{ {\begin{array}{*{20}c} {\arctan \left( {\frac{y}{x}} \right),} & {x > 0} \\ {\arctan \left( {\frac{y}{x}} \right) + \pi ,} & {x < 0\;\& \,y \geqslant 0} \\ {\arctan \left( {\frac{y}{x}} \right) - \pi ,} & {x < 0\;\& \,y < 0} \\ \end{array} } \right.$. In the case $x=0$ it is $\text{sgn}(y)\frac{\pi}{2}$.

7. Not exactly the quadrant, because $\arctan \frac{0}{x}$ is not in any quadrant; it is on the boundary between two quadrants. But you are correct that $\tan(\theta)$ goes from $-\infty$ to $\infty$ for $-\pi/2 < \theta < \pi/2$, and then repeats for $\pi/2 < \theta < 3\pi/2$. And that means that tangent does not have a "true" inverse for $0\le\theta\le 2\pi$. The "principal value" is always between $-\pi/2$ and $\pi/2$. To find values between 0 and $2\pi$ we have to look at the x and y components separately.
(When you divide y/x you lose information about the signs of y and x separately.) I think it is better to look at it purely geometrically. The complex numbers can be thought of as points in the "complex" plane. The complex number a+ bi corresponds to the point with coordinates (a, b). That is, the "x" axis is the "real axis" and the "y" axis is the imaginary axis. The "argument" of the complex number a+ bi is the angle the line from (a, b) to the origin makes with the positive x-axis. For any positive real number, that angle is 0. For any imaginary number, ai, with $a$ positive, that angle is $\pi/2$. For any negative real number, that angle is $\pi$, and for any imaginary number, ai, with $a$ negative, that angle is $3\pi/2$.

Yet another way of looking at it: $a+ bi= r e^{i\theta}= r(\cos(\theta)+ i \sin(\theta))$ with r always non-negative. In particular, if $\theta= 0$, $a+ bi= r e^{i(0)}= r(\cos(0)+ i \sin(0))= r(1+ i0)= r$, which is non-negative. If $\theta= \pi$, $a+ bi= r e^{i\pi}= r(\cos(\pi)+ i \sin(\pi))= r(-1+ i(0))= -r$, which is negative. Also, if $\theta= \pi/2$, $a+ bi= r(\cos(\pi/2)+ i\sin(\pi/2))= r(0+ i)= ri$, a "positive" imaginary number, while if $\theta= 3\pi/2$ (or $-\pi/2$), then $a+ bi= r(\cos(3\pi/2)+ i \sin(3\pi/2))= r(\cos(-\pi/2)+i \sin(-\pi/2))= r(0- i)= -ri$, a "negative" imaginary number.
We were taught that if you have a complex number z, then $z = x + yi = r \exp(i\theta)$ where $r = \sqrt{x^2 + y^2}$ and $\theta = \arctan \frac{y}{x}$. So in this case wouldn't $\theta = \arctan \frac{0}{-64} = \arctan 0 = 0$?

Do you understand the geometric meaning of argument? Plot z on an Argand diagram. The argument is now obvious.

10. Originally Posted by mr fantastic
Do you understand the geometric meaning of argument? Plot z on an Argand diagram. The argument is now obvious.
No, I can't say I do fully understand the geometric meaning of an argument. Do you mind explaining?

11. Originally Posted by TheGreenLaser
No, I can't say I do fully understand the geometric meaning of an argument. Do you mind explaining?
It will be explained in your class notes or textbook. Google will find numerous websites that explain. You cannot understand the polar form of a complex number without understanding this concept (in my opinion, anyway).

12. Alright, thanks.
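The polar-form recipe discussed in this thread is easy to verify numerically. A sketch using Python's cmath (the helper name is my own); with Arg(−64) = π rather than 0, the four fourth roots come out as 2±2i and −2±2i rather than the real 2√2:

```python
import cmath
import math

def nth_roots(w, n):
    """All n-th roots of a complex number w via the polar-form formula
    z_m = r**(1/n) * exp(i * (theta + 2*pi*m) / n), m = 0, ..., n-1."""
    r, theta = cmath.polar(w)  # w = r * exp(i*theta); here theta = pi
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * m) / n)
            for m in range(n)]

roots = nth_roots(-64, 4)
for z in roots:
    print(z, z ** 4)
# Up to rounding, the roots are 2+2i, -2+2i, -2-2i, 2-2i,
# and each satisfies z**4 = -64.
```

This also shows concretely why taking θ = 0 fails: the formula then returns the fourth roots of +64 instead.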
https://www.tug.org/pipermail/texhax/2009-November/013738.html
# [texhax] hyperref

Barbara Beeton bnb at ams.org
Wed Nov 11 16:00:53 CET 2009

> I am a MiKTeX user, using the hyperref package. With this package, if any
> two equations come one directly after another, more space than usual appears
> between them, e.g.
>
>   $$A+B$$
>   $$A+B$$
>
> Between the two equations there is more space if I use the hyperref package.

this is unrelated to hyperref. if you have one equation (or any display math structure) directly following another, you will always get more space. the amsmath package defines multi-line display structures that will avoid this extra space.
-- bb
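A minimal sketch of bb's suggestion — the surrounding preamble is my own, but it contrasts two consecutive displays with a single amsmath multi-line display:

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{hyperref}
\begin{document}

% Two separate displays: extra vertical space appears between them.
\[ A + B \]
\[ A + B \]

% One amsmath display with two lines: normal inter-line spacing.
\begin{gather*}
  A + B \\
  A + B
\end{gather*}

\end{document}
```

The `gather*` environment (or `align*`, when alignment points are wanted) keeps both formulas inside one display, so no between-display glue is inserted.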
http://afp.sourceforge.net/entries/Max-Card-Matching.shtml
# Maximum Cardinality Matching

Title: Maximum Cardinality Matching
Author: Christine Rizkallah
Submission date: 2011-07-21

Abstract: A matching in a graph G is a subset M of the edges of G such that no two share an endpoint. A matching has maximum cardinality if its cardinality is at least as large as that of any other matching. An odd-set cover OSC of a graph G is a labeling of the nodes of G with integers such that every edge of G is either incident to a node labeled 1 or connects two nodes labeled with the same number i ≥ 2. This article proves Edmonds' theorem: Let M be a matching in a graph G and let OSC be an odd-set cover of G. For any i ≥ 0, let n(i) be the number of nodes labeled i. If |M| = n(1) + ∑_{i ≥ 2} (n(i) div 2), then M is a maximum cardinality matching.

BibTeX:
```@article{Max-Card-Matching-AFP,
  author = {Christine Rizkallah},
  title = {Maximum Cardinality Matching},
  journal = {Archive of Formal Proofs},
  month = jul,
  year = 2011,
  note = {\url{http://afp.sf.net/entries/Max-Card-Matching.shtml}, Formal proof development},
  ISSN = {2150-914x},
}```
License: BSD License
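The certificate condition from the abstract is easy to check on a small example. The Python sketch below uses my own helper names and is unrelated to the Isabelle formalization; it verifies the odd-set-cover bound for the triangle graph K₃, where labelling every node 2 certifies that a single-edge matching is maximum.

```python
from collections import Counter

def osc_bound(labels):
    """n(1) + sum over i >= 2 of (n(i) div 2), for a node labelling."""
    n = Counter(labels.values())
    return n[1] + sum(n[i] // 2 for i in n if i >= 2)

def is_odd_set_cover(edges, labels):
    """Every edge touches a node labelled 1, or joins two nodes
    carrying the same label i >= 2."""
    return all(labels[u] == 1 or labels[v] == 1
               or (labels[u] == labels[v] and labels[u] >= 2)
               for u, v in edges)

# Triangle graph K3: the matching M = {(0, 1)} has cardinality 1.
edges = [(0, 1), (1, 2), (0, 2)]
labels = {0: 2, 1: 2, 2: 2}  # every edge joins two nodes labelled 2

assert is_odd_set_cover(edges, labels)
print(osc_bound(labels))  # → 1, matching |M|, so M is maximum
```

Since |M| = 1 equals the cover's bound n(1) + ⌊3/2⌋ = 0 + 1, the theorem certifies that no larger matching exists in K₃.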
https://chat.stackexchange.com/transcript/10243?m=54648671
12:20 PM Old posts which have been bumped - searching for created:..2020-01-01 (or something similar) and choosing to sort by activity. 12:51 PM 0 Should we blacklist or delete the research tag? This site is about mathematical research, so I don't see how the "research" tag is helpful, any more than a "mathematics" tag would be helpful. Of the 23 questions with the tag, 7 have been closed, and all 23 have other tags. Alternatively, someo... 1:01 PM It seems that the research tag was created in 2017. Jun 13 '17 at 11:01, by Martin Sleziak I see that a new tag has been created: As mentioned in the post, there are 23 questions with the tag. 2 What are the names of the research journals that focus on convex geometry? I know of "Advances in Geometry" and "Discrete & Computational Geometry" but no others. Context of question: I am a graduate student who recently submitted a paper to a geometry research journal. They did not accept my... 2 Since the natural logarithm, i.e. with base $e$, is very commonly used in research papers and that both $\ln(x)$ and $\log(x)$ are used to denote it, it is natural* to ask which of these notations to use when preparing a paper. The fact that both are used in literature concerning the same topics ... 3 Note: My first question in this site. MSE users asked me to post this question here for better response I am a Masters student in Pure Mathematics. I need to do a Masters thesis in "Linear Algebra in Graph Theory" where I will have to publish some original work in our Departmental Journal mean... 80 I’m working on a paper that makes heavy use of colorful diagrams to supplement the text. For most of these it would probably not be possible to create grayscale versions that convey the same information as effectively. I’m a bit worried about this because (1) I imagine that some people like to pr... 7 I am a PhD student at one university and an invited professor at a second, i.e. I do not have a permanent position in the second one. 
Now I need to indicate an affiliation in a journal paper but I do not know whether to indicate or not to indicate a university where I am an invited professor. ... 2 I have done my Master's degree in mathematics and currently I am doing Ph.D. in mathematics in India. I have completed my course work and my supervisor works in Topological Groups. He suggests me some books for reading like Topological Groups and Related Structures by A. V. Arkhangelʹski... 2 The title says most of it really. How constrained are you in an academic career based on the topic of your PhD thesis? Is it plausible for someome to be researching in an area of math vastly different from their thesis topic later in an academic career, or does it largely determine the scope of r... 1 I'm developing a theory that (I think) is pretty good and something new. I'm thinking of contacting a respected researcher (one of the best in the field), mainly because I'm not 100% that this has not been already done (perhaps under another context) or perhaps the theory even isn't new, but equi... 11 Admittedly, this is a soft question. My own experience in mathematical research has been of long periods of research, mostly characterized in long "blocks" and sporadic breakthrough. How does this image fit with research semesters, in which many researchers (veterans, early careers, postdocs a... 2 It's been a few months since I've been curious about mean curvature flow and now I'm reading Robert Haslhofer's lecture notes. I like the subject and would like to do research on it, but I know neither where to start nor current research trends. Can anyone explain the main threads of research in... 1 Suppose $X\sim \text{Bin}(2n,p)$ and $X_1,X_2\sim\text{Bin}(n,p)$ are independent, with $X_1+X_2=X$. I'm interested in the rate of convergence for the absolute difference $$\left\vert P(X>c|X_1\leq a,X_2\leq a)-P(X>c)\right\vert$$ where $a$ and $c$ are constants in a neighborhood of $np$ and \$2... 
0 The exact solution of the Helmholtz equation for the scattering of waves by a sphere is relatively straightforward and has been known since the time of Lord Rayleigh. The exact solution of the wave equation for scattering of waves by a sphere is much trickier and has been dealt with in ... 1 I am writing a paper for an independent study course I am enrolled in. I have contributed all of the work to the project. My supervisor has been a guiding force during this research, suggesting things here and there. These suggestions haven't changed the fundamental focus of the project. When di... 8 How much truth is there to Chomsky's remark that "mathematicians stop working when things get too difficult"? https://youtu.be/atupfHizJxM?t=453 Is this true in your own work? How do you know you are finished with a given line of inquiry (at least for the time being) and it's time to publish t... 7 I've taken a graduate course in model theory and I like it so much that I can imagine doing research in this area. Are there survey articles or review papers on the current research topics in model theory? Where can I find them? Also, I wish there is literature about the common proof techniques ... 25 I had a discussion with my advisor about what am I interested as my future research direction and I said it is special functions and q-series. He laughed and said that the topic is essentially dead and the people who study it are dinosaurs. I'm really confused by this statement and don't know wha... 14 As it is reasonable to think the work of mathematicians will be developed/made in their offices of universities (or in eventual seminars or conferences), here are the colleagues, books and journals, connection to databases and blackboards. My belief is that a great part of mathematicians conti... 37 Admittedly, a soft-question. I, being a very young researcher (PhD student) have personally faced the following situation many times: You delve into a problem desperately. 
No progress for a very long while. All of a sudden, you get the light, and boom: the result is proven. Looking in retrospect... 6 I would like to know sources, articles, books or other, that provide information on ethical aspects in the research of mathematics, I wondered what is the literature that this community knows about ethical issues and proposals that were proposed in the context of mathematical research. Ques... 5 A soft question. I am a PhD student, at early stages of my academic career; and have personally experienced the following many times. Sometimes you come up with a result, that you are not quite happy with (for whatever reason, maybe you think is not novel, or is simple, or has no applications, ... 24 I would like to know as curiosity how the editorial board or editors* of a mathematical journal evaluate the quality, let's say in colloquial words the importance, of papers or articles. Question. I would like to know how is evaluate the quality of an article submitted in a journal. Are there... 3 I want to get into some of the big classification problems in algebraic geometry, but have a very broad question. Ultimately we would like to classify all varieties over some field up to isomorphism, and this is done via moduli theory. I am studying the moduli of elliptic curves at the moment. Fr... 37 (This is a restatement of a question asked on the Mathematics.SE, where the solutions were a bit disappointing. I'm hoping that professional mathematicians here might have a better solution.) What are some problems in pure mathematics that require(d) solution techniques from the broadest and ... As mentioned in the post, 7 of those questions are closed. 
That is, 58.93% of questions in this tag were deleted; here are stats for all tags: data.stackexchange.com/mathoverflow/query/1162648/… With 30.43% closed, it is among the tags with a high percentage of closed questions: data.stackexchange.com/mathoverflow/query/1162425/… 1:30 PM I see that the above query with percentage of deleted questions has several tags which were removed (and thus have 100% of questions deleted) in the first places. If we're not interested in those: data.stackexchange.com/mathoverflow/query/1251170/… 2:10 PM Of course, if we look only at tags with many questions, then there aren't so many deleted tags among them: data.stackexchange.com/mathoverflow/query/1162648/… data.stackexchange.com/mathoverflow/query/1251170/… However, since I wanted to include , I had to use a lower threshold. 2 hours later… 4:26 PM I'd support deleting it, it has no coherent or useful use I can think of. It's small enough to be done manually, btw (unfortunately moderators mostly ignore now requests about tags). — YCor 2 hours ago
https://en.wikiversity.org/wiki/Vectors_and_coordinates
# Vectors and coordinates Chandra image of LP 944-20 before flare and during flare. Credit: Marshall Space Flight Center/NASA. This problem set is devoted to a variety of vector situations and coordinates for evaluation. ## Problem 1 For standard basis, or unit, vectors (i, j, k) and vector components of a (ax, ay, az), what are the right ascension, declination, and value of a: If the x-axis is the longitude of the Greenwich meridian, and ax equals ay, then RA equals? If ax equals ay equals az, then the declination is? The value of a is given by? ## Problem 2 For standard basis, or unit, vectors (i, j, k) and vector components of a (ax, ay, az), what are the right ascension, declination, and value of a: for ax equals ay equals az If the x-axis is the longitude of the Greenwich meridian, and the object is at ax and 2ay, then RA equals? ax and 2ay and 3az, then the declination is? The value of a is given by? ## Problem 3 For standard basis, or unit, vectors (i, j, k) and vector components of a (ax, ay, az), what are the right ascension, declination, and value of a: for ax equals ay equals az If the x-axis is the longitude of the Greenwich meridian, and the object is at 3ax and 4ay, then RA equals? 3ax and 4ay and 5az, then the declination is? The value of a is given by? ## Problem 4 For standard basis, or unit, vectors (i, j, k) and vector components of a (ax, ay, az), what are the right ascension, declination, and value of a: for ax equals 2ay equals 3az If the x-axis is the longitude of the Greenwich meridian, and the object is at 3ax and 4ay, then RA equals? 3ax and 4ay and 5az, then the declination is? The value of a is given by? ## Problem 5 An object has RA 10h 10m 10s Dec -20° 20' 20" and r = 23 lyrs. What are ax, ay, and az? What are ℓ and b? What are the ecliptic longitude and latitude? What are J1855 and B1855? What are J2100 and B2100? ## Problem 6 An object has coordinates: 125.678 -85.678 and r = 110 pc. What are RA and Dec? What are ax, ay, and az? 
What are ℓ and b? What are the ecliptic longitude and latitude? What are J1800 and B1800? What are J2075 and B2075? ## Problem 7 For standard basis, or unit, vectors (i, j, k) and vector components of a (ax, ay, az), for ax equals 2ay equals 3az: If the x-axis is the longitude of the Greenwich meridian, and the object is at a = 3ax and b = 4ay, then what is ${\displaystyle a\times b?}$ ${\displaystyle a\cdot b?}$ ## Problem 8 Standard basis, or unit, vectors are (i, j, k) for vector components of a (ax, ay, az). Let ax equal 3ay equal 4az. What are ${\displaystyle [5a_{x},6a_{y},7a_{z}]\cdot [5a_{x},6a_{y},7a_{z}]?}$ If a = [5a_x, 6a_y, 7a_z] and b = [8a_x, 9a_y, 10a_z], then ${\displaystyle a\times b?}$ ## Hypotheses 1. The square root of negative one is not needed in vector space.
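Several of the problems above reduce to the same conversion between Cartesian components and spherical sky coordinates. A short Python sketch of that conversion (the function name, and the convention that the x-axis points toward RA = 0, are assumptions made for this illustration):

```python
import math

def to_ra_dec(ax, ay, az):
    """Cartesian components -> (RA in hours, Dec in degrees, magnitude |a|).
    Convention assumed here: x-axis toward RA = 0 (the Greenwich meridian in
    these problems), z-axis toward the celestial pole."""
    r = math.sqrt(ax * ax + ay * ay + az * az)
    ra_deg = math.degrees(math.atan2(ay, ax)) % 360.0
    dec_deg = math.degrees(math.asin(az / r))
    return ra_deg / 15.0, dec_deg, r  # 15 degrees of RA per hour

# Problem 1: ax = ay puts the object at RA = 45 deg = 3 h, and with
# ax = ay = az the declination is asin(1/sqrt(3)), about +35.26 deg.
ra, dec, r = to_ra_dec(1.0, 1.0, 1.0)
```

The magnitude is always |a| = √(ax² + ay² + az²), so for equal components it is √3·ax.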
http://howtolose10poundsinamonth.info/taps-for-maple-trees/taps-for-maple-trees-maple-how-to-tap-a-maple-tree-for-making-maple-syrupthe-art-of-doing-stuff-its-a-sappy-miracle/
# Taps for Maple Trees

How to tap a maple tree for making maple syrup (The Art of Doing Stuff): "It's a sappy miracle." Related image captions from the gallery:

- Tapping birch and black walnut trees for syrup (Blain's Farm): tapping birch trees in the forest
- Ways to tap a tree for maple syrup (wikiHow)
- How to make maple syrup: when is maple tapping season done (sugarmaking, removing the spile)
- Funks Grove farm brings Illinois maple syrup from tree to tabletop (Funks Grove maple sirup)
- When is the best time for sugarmakers to tap their maple trees
- When will the sap start flowing: Vergas designates an official maple syrup season, which typically runs during the month of March
- Tree tapping demonstration (PenBay Pilot): learn how to tap maple trees and boil sap at home to make maple syrup; Aldermere Farm in Rockport will hold a tree-tapping demonstration
- About NH maple syrup (New Hampshire Maple Experience): maple tree tapping, maple tree taps
- Preparation (Tap My Trees, maple sugaring for the hobbyist): the most effective way to identify maple trees is to create a map of your yard and record each type of tree, or at least the maples, before you tap
- Ohio Thoughts: sugaring, or tapping maple trees
- Tapping a sugar maple tree for maximum syrup: set up, tap hole, drill bit sizes, bucket tips
https://brilliant.org/problems/average-dice-game/
# Average dice game Consider the following game: you can roll a pair of regular dice up to $$4$$ times, and after each roll you can either stop and get paid as many coins as the points you scored, or roll the two dice again. After the last roll you are obliged to take whatever amount you scored last. Assuming that you use the best strategy possible, what is your average win? Details and assumptions Assuming that your average win is the fully reduced fraction $$\frac{a}{b}$$, type the sum $$a+b$$ as your answer.
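The optimal strategy here is a threshold rule found by backward induction: with k rolls left, keep the current sum only if it beats the expected value of continuing with k−1 rolls. A Python sketch of that recursion, using exact rational arithmetic (the variable names are mine, not part of the problem):

```python
from fractions import Fraction

# Distribution of the sum of two fair dice: P(s) for s = 2..12.
dist = {}
for d1 in range(1, 7):
    for d2 in range(1, 7):
        s = d1 + d2
        dist[s] = dist.get(s, Fraction(0)) + Fraction(1, 36)

# Backward induction: v = value of the game with k rolls remaining.
v = Fraction(7)  # with one roll left you must keep the result; E[sum] = 7
for _ in range(3):  # fold back through the 2nd-, 3rd-, and 4th-to-last rolls
    # roll, then take the better of the score s and the continuation value v
    v = sum(p * max(s, v) for s, p in dist.items())
```

Because `Fraction` reduces automatically, `v` ends up as the fully reduced fraction the problem asks about, so `v.numerator + v.denominator` is the requested answer.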
http://tex.stackexchange.com/questions/54774/onehalfspacing-in-titlepage
# onehalfspacing in titlepage

How can I get onehalfspacing on a titlepage? I tried adding the \onehalfspacing command in different places there; however, no extra spacing was produced.

\begin{titlepage}
{\fontsize{22}{22}\centering\bfseries\MakeUppercase{Lorem ipsum dolor sit amet, consectetur adipisicing elit}}
\end{titlepage}

- You might want to add a paragraph break (\par) at the end of the group. Also the second number of \fontsize is the baselineskip, i.e. the line spacing, which is normally 20% larger than the font size (the first number). So use {\fontsize{22}{27}\centering ... \par}. The 27 should probably be even higher. I wouldn't recommend using \fontsize at all, but \Huge and the setspace package. –  Martin Scharrer May 7 '12 at 13:10

Delete the brackets around your sentence:

\documentclass[pagesize]{scrartcl}
\usepackage{setspace}
\begin{document}
\begin{titlepage}
\onehalfspacing
\fontsize{22}{22}\centering\bfseries\MakeUppercase{Lorem ipsum dolor sit amet, consectetur adipisicing elit}
\end{titlepage}
\end{document}

- As stated above, \fontsize{22}{22} isn't right and might interfere with setspace. The second number should be at least 20% higher. –  Martin Scharrer May 7 '12 at 13:10
- @Keks Dose Thanks. It really works, but I have another problem now. I want to write some non-bold text after this, and \bfseries without brackets around it does not do a good job. I cannot change it to \textbf because of \MakeUppercase. How should I solve this? –  Pavasaris May 7 '12 at 13:11
- @myname: Try it with the braces but with a \par before }. –  Martin Scharrer May 7 '12 at 13:14
- @Martin Scharrer: Thanks! Works fine. –  Pavasaris May 7 '12 at 13:17
- @myname If possible and if it seems adequate to you, please hit the "answer accepted" button on the left side of my answer. –  Keks Dose May 7 '12 at 15:38
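Pulling the comment threads together, here is a sketch of a version that keeps the bold uppercase title, uses a baselineskip about 20% above the font size, and closes the group with \par so the size actually applies to the line (assuming the same scrartcl class as the answer):

```latex
\documentclass{scrartcl}
\usepackage{setspace}
\begin{document}
\begin{titlepage}
  \onehalfspacing
  \centering
  % \selectfont makes the \fontsize change take effect;
  % the trailing \par ends the paragraph inside the group,
  % so the 22/27 baselineskip is actually used for the title lines.
  {\fontsize{22}{27}\selectfont\bfseries
   \MakeUppercase{Lorem ipsum dolor sit amet, consectetur adipisicing elit}\par}
  Normal-weight text can follow the title here.
\end{titlepage}
\end{document}
```

Keeping \bfseries inside the braced group means everything after the group reverts to the normal weight, which addresses the follow-up question about writing non-bold text afterwards.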
https://www.physicsforums.com/threads/help-on-a-dynamics-kinematics-problem.569915/
Homework Help: Help on a dynamics/kinematics problem.

1. Jan 22, 2012 Abdilatif

1. The problem statement, all variables and given/known data
http://myclass.peelschools.org/sec/11/22607/Lessons/Unit 2 Dynamics/Review Chapter 2.pdf
Question 12, a and b?

2. Relevant equations
F = ma (force = mass × acceleration)
a = g(m2 − μs·m1)/(m1 + m2), where g = gravity, μs = coefficient of static friction, m1 = mass 1, m2 = mass 2

3. The attempt at a solution
a = F/m, so the mass would be 6 kg if I use the mass of the whole system. The force would be only for the mass being acted on, which is F = mg, so F = 2 × 9.8 = 19.6 N. That is the force acting on the system. So, back to the first equation, F = ma: 19.6 = 6a, and simple math gives a = 3.3 m/s². The book has the same answer. Though I am puzzled as to why they use T = ma instead of F = ma; what does T stand for? Also, by doing some more math, you know that the general equation is a = g(m2 − μs·m1)/(m1 + m2). So then, when I plug in my values I get a negative value; is that right? Either way, in which cases is acceleration negative: when accelerating backwards, or when falling?

2. Nov 11, 2012 lingualatina

Just to alert you, your link is broken, but if you have the same answer as the book, you're fine. You asked why the book used T instead of F. T stands for tension, which is a type of force that seeks to pull something apart; basically, it can be found in strings or pulleys, which I'm guessing this problem is about. As for the negative values for acceleration, it depends on how you assign your coordinate system. Most commonly, the left (negative x) direction is negative, and the down (negative y) direction is negative also, leaving the right (positive x) and up (positive y) directions to refer to positive values. Let's say you have a system accelerating downwards toward the ground under the influence of gravity, and a resistive force of some kind (maybe air friction) acts on it to slow its acceleration. 
Furthermore, let's use the sign convention I mentioned before, where down is negative and up is positive. In this case, the value of g would be considered negative, whereas the acceleration the air friction causes is considered positive. If the system is still accelerating downward even when air friction acts on it, then its acceleration points in the downward direction (the same direction as g) and is thus considered negative. So receiving a negative value for your answer doesn't mean it's wrong; it just depends on which directions you call negative and which you call positive. So in the general equation you gave, if you decided g was negative and the system was still accelerating in the direction of g, then the system's acceleration was also negative.

3. Nov 11, 2012 Abhinav R

T stands for tension, which is given by the downward component mg.
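The original problem link is dead, but the numbers in the attempt (6 kg total system, a 2 kg hanging mass, a ≈ 3.3 m/s²) fit the standard table-and-pulley setup: a 4 kg block on a horizontal surface pulled over an ideal pulley by a 2 kg hanging mass. A Python sketch of that assumed setup (the function name and masses are mine, reconstructed from the thread, not from the missing PDF):

```python
def table_pulley(m1, m2, mu=0.0, g=9.8):
    """Block m1 on a horizontal table (friction coefficient mu), connected
    over an ideal pulley to a hanging mass m2.
    Newton's second law for each mass:
        hanging mass:  m2*g - T       = m2*a
        table mass:    T - mu*m1*g    = m1*a
    Returns (a, T); positive a means m2 accelerates downward."""
    a = g * (m2 - mu * m1) / (m1 + m2)
    t = m1 * (a + mu * g)
    return a, t

a, t = table_pulley(4.0, 2.0)              # frictionless case from the thread
a_f, t_f = table_pulley(4.0, 2.0, mu=0.2)  # with friction the acceleration drops
```

Frictionless, this gives a = 9.8 × 2/6 ≈ 3.27 m/s², matching the 3.3 m/s² above, and T = m1·a ≈ 13.1 N for the block on the table, which is why the book writes T = ma there: only the string tension accelerates that block.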
https://www.openstarts.units.it/handle/10077/4351
Please use this identifier to cite or link to this item: http://hdl.handle.net/10077/4351

Title: Weierstrass points, inflection points and ramification points of curves
Authors: Ballico, Edoardo
Issue Date: 1998
Publisher: Università degli Studi di Trieste. Dipartimento di Scienze Matematiche
Source: Edoardo Ballico, "Weierstrass points, inflection points and ramification points of curves", in: Rendiconti dell’Istituto di Matematica dell’Università di Trieste. An International Journal of Mathematics, 30 (1998), pp. 141-154.
Series/Report no.: Rendiconti dell’Istituto di Matematica dell’Università di Trieste. An International Journal of Mathematics, 30 (1998)
Abstract: Let C be an integral curve of the smooth projective surface S and $P \in C$. Let $\pi: X \rightarrow C$ be the normalization and $Q \in X$ with $\pi(Q) = P$. We are interested in the case in which Q is a Weierstrass point of X. We compute the semigroup N(Q, X) of non-gaps of Q when S is a Hirzebruch surface $F_e$, $P \in C_{reg}$, and P is a total ramification point of the restriction to C of a ruling $F_e \rightarrow P^1$. We also study families of pairs (X, Q) such that the first two integers of N(Q, X) are k and d. To do that we study families of pairs (P, C) with C a plane curve, deg(C) = d, C has multiplicity d − k at P, C is unibranch at P, and a line through P has intersection multiplicity d with C at P.
Type: Article
URI: http://hdl.handle.net/10077/4351
ISSN: 0049-4704
Appears in Collections: Rendiconti dell'Istituto di matematica dell'Università di Trieste: an International Journal of Mathematics vol.30 (1998)
http://mathhelpforum.com/calculus/13449-solved-radius-interval-convergence-1-a.html
# Math Help - [SOLVED] Radius and interval of convergence 1

1. ## [SOLVED] Radius and interval of convergence 1

Please check my solution and tell me what you guys think of it in general: [solution attached as thumbnail images]

2. What is the alternating series test "for divergence"? But I got the same thing as you did.

3. Originally Posted by ThePerfectHacker
What is the alternating series test "for divergence"? But I got the same thing as you did.

Sorry about that, I was reusing a template I created for a previous problem and forgot to delete the "for divergence".
https://worldwidescience.org/topicpages/o/ouachita+lithospheric+seismology.html
Sample records for ouachita lithospheric seismology

1. Seismological Constraints on Lithospheric Evolution in the Appalachian Orogen

Science.gov (United States) Fischer, K. M.; Hopper, E.; Hawman, R. B.; Wagner, L. S. 2017-12-01 Crust and mantle structures beneath the Appalachian orogen, recently resolved by seismic data from the EarthScope SESAME Flexible Array and Transportable Array, provide new constraints on the scale and style of the Appalachian collision and subsequent lithospheric evolution. In the southern Appalachians, imaging with Sp and Ps phases reveals the final (Alleghanian) suture between the crusts of Laurentia and the Gondwanan Suwannee terrane as a low-angle structure. Isostatic arguments (Kellogg, 2017) indicate crustal thicknesses were 15-25 km larger at the end of the orogeny, implying a thick crustal root across the region. The present-day residual crustal root beneath the Blue Ridge mountains is estimated to have a density contrast with the mantle of only 104±20 kg/m³. This value is comparable to other old orogens but lower than values typical of young or active orogens, indicating a loss of lower crustal buoyancy over time. At mantle depths, the negative shear velocity gradient that marks the transition from lithosphere to asthenosphere, as illuminated by Sp phases, varies across the Appalachian orogen. This boundary is shallow beneath the northeastern U.S. and in the zone of Eocene volcanism in Virginia, where low velocity anomalies occur in the upper mantle. These correlations suggest recent active lithosphere-asthenosphere interaction.

2. Probing the Cypriot Lithosphere: Insights from Broadband Seismology

Science.gov (United States) Ogden, C. S.; Bastow, I. D.; Pilidou, S.; Dimitriadis, I.; Iosif, P.; Constantinou, C.; Kounoudis, R. 
2017-12-01 Cyprus, an island in the eastern Mediterranean Sea, is an ideal study locale for understanding both the final stages of subduction and the internal structure of so-called 'ophiolites': rare, on-land exposures of oceanic crust. The Troodos ophiolite offers an excellent opportunity to interrogate a complete ophiolite sequence from mantle rocks to pillow lavas. However, determining its internal architecture, and that of the subducting African plate deep below it, cannot be easily achieved using traditional field geology. To address this issue, we have built a new network of five broadband seismograph stations across the island. These, along with existing permanent stations, record both local and teleseismic earthquakes that we are now using to image Cyprus' crust and mantle seismic structure. Receiver functions are time series, computed from three-component seismograms, which contain information about lithospheric seismic discontinuities. When a P-wave strikes a velocity discontinuity such as the Moho, energy is converted to S-waves (the direct Ps phase). The widely-used H-K stacking technique utilises this arrival, and subsequent crustal reverberations (PpPs and PsPs+PpSs), to calculate crustal thickness (H) and bulk-crustal Vp/Vs ratio (K). Central to the method is the assumption that the Moho produces the largest-amplitude conversions after the direct P-arrival, which is valid where the Moho is sharp. Where the Moho is gradational or upper crustal discontinuities are present, the Moho signals are weakened and masked by shallow crustal conversions, potentially rendering the H-K stacking method unreliable. Using a combination of synthetic and observed seismograms, we explore Cyprus' crustal structure and, specifically, the reliability of the H-K method in constraining it. Data quality is excellent across the island, but the receiver function Ps phase amplitude is low, and crustal reverberations are almost non-existent. 
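The H-K stacking described in the Cyprus record rests on simple moveout equations for the converted phases. As an illustrative sketch (these are the standard single-layer receiver-function delay-time formulas, not code from this project, and the function name is mine), the predicted delay times behind the direct P arrival for a crust of thickness H, P velocity Vp, and Vp/Vs ratio K at ray parameter p:

```python
import math

def rf_delay_times(h_km, vp, k, p):
    """Delay times (s) of Ps, PpPs, and PpSs+PsPs behind direct P for a
    single-layer crust: thickness h_km (km), P velocity vp (km/s),
    Vp/Vs ratio k, ray parameter p (s/km)."""
    vs = vp / k
    qs = math.sqrt(1.0 / vs ** 2 - p * p)  # S-wave vertical slowness
    qp = math.sqrt(1.0 / vp ** 2 - p * p)  # P-wave vertical slowness
    t_ps = h_km * (qs - qp)                # direct conversion
    t_ppps = h_km * (qs + qp)              # first crustal reverberation
    t_ppss = 2.0 * h_km * qs               # PpSs + PsPs multiple
    return t_ps, t_ppps, t_ppss

# e.g. an assumed 35 km crust, Vp = 6.5 km/s, Vp/Vs = 1.75, p = 0.06 s/km
t_ps, t_ppps, t_ppss = rf_delay_times(35.0, 6.5, 1.75, 0.06)
```

In H-K stacking one grid-searches over (H, K), summing receiver-function amplitudes at these predicted times (the PpSs+PsPs term is usually weighted negatively because of its reversed polarity); when the reverberations are almost absent, as reported for Cyprus, the second and third terms contribute little and the (H, K) trade-off is poorly resolved.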
Therefore, a simple, abrupt wavespeed jump at the 3. Insights on the lithospheric structure of the Zagros mountain belt from seismological data analysis Science.gov (United States) Paul, A.; Kaviani, A.; Vergne, J.; Hatzfeld, D.; Mokhtari, M. 2003-04-01 As part of a French-Iranian collaboration, we installed a temporary seismological network across the Zagros for 4.5 months in 2000-2001 to investigate the lithospheric structure of the mountain belt. The network included 65 stations located along a 600-km long line (average spacing of ˜10 km) from the coast of the Persian Gulf to the stable block of Central Iran. A migrated depth cross-section computed from radial receiver functions displays clear P-to-S conversions at the Moho beneath most of the profile. The average Moho depth is 45 to 50 km beneath the folded belt. It deepens rather abruptly beneath the suture zone of the MZT (Main Zagros Thrust) and the Sanandaj-Sirjan (SS) metamorphic zone. The maximum crustal thickness of ˜65 km is reached 50 km NE of the surface trace of the MZT. The region of over-thickened crust is shifted to the NE with respect to the areas of highest elevations and the strongest negative Bouguer anomaly. To the NE, the crust of the block of Central Iran is 40-km thick on average. Two patches of Ps converted energy can be seen below the Moho in the northern half of the transect that cannot be attributed to multiple reflections. Teleseismic P residual travel time curves display lateral variations as large as 1.5 s with both long (faster arrivals in the SW than in the NE) and short-scale variations (in the MZT region). They were inverted for variations of P wave velocity with the ACH technique. The crustal layer exhibits rather strong lateral variations of Vp with lower velocities under the MZT and the Urumieh-Dokhtar magmatic assemblage, and faster velocities under the SS zone. 
In the mantle, a clear difference appears between the faster P wave velocities of the Arabian craton and the relatively lower velocities of the mantle of Central Iran. 4. Seismology International Nuclear Information System (INIS) 1980-01-01 The Norwegian Seismic Array (NORSAR) has in 1979 worked mainly on reports and investigations for the seismological expert group established in 1976 by the UN Disarmament Committee in Geneva. One of NORSAR's staff is scientific secretary for the group. Reports published by the group in 1978 and 1979 proposed a global surveillance system for nuclear explosions and NORSAR as one of the largest stations will play a central role in the proposed network. A number of other tasks have been performed by NORSAR in connection with the seismology and tectonics of the Norwegian continental shelf, a projected dam in Tanzania, a dam in S.W. Norway, seismic activity in Spitzbergen and ore prospecting in N. Norway. (JIW) 5. Seismic anisotropy of the mantle lithosphere beneath the Swedish National Seismological Network (SNSN) Czech Academy of Sciences Publication Activity Database Eken, T.; Plomerová, Jaroslava; Roberts, R.; Vecsey, Luděk; Babuška, Vladislav; Shomali, H.; Bodvarsson, R.
Despite obvious differences between stars and planetary bodies, these disciplines share many similarities and together form a coherent field of scientific research. This unique book takes a transdisciplinary approach to seismology and seismic imaging, reviewing the most recent developments in these extraterrestrial contexts. With contributions from leading scientists, this timely volume systematically outlines the techniques used in observation, data processing, and modelling for asteroseismology, helioseismology, and planetary seismology, drawing comparisons with seismic methods used in geophysics. Important recent discoveries in each discipline are presented. With an emphasis on transcending the traditional boundaries of astronomy, solar, planetary... 7. Rotational seismology Science.gov (United States) Lee, William H K. 2016-01-01 Rotational seismology is an emerging study of all aspects of rotational motions induced by earthquakes, explosions, and ambient vibrations. It is of interest to several disciplines, including seismology, earthquake engineering, geodesy, and earth-based detection of Einstein's gravitational waves. Rotational effects of seismic waves, together with rotations caused by soil-structure interaction, have been observed for centuries (e.g., rotated chimneys, monuments, and tombstones). Figure 1a shows the rotated monument to George Inglis observed after the 1897 Great Shillong earthquake. This monument had the form of an obelisk rising over 19 metres high from a 4 metre base. During the earthquake, the top part broke off and the remnant of some 6 metres rotated about 15° relative to the base. The study of rotational seismology began only recently when sensitive rotational sensors became available due to advances in aeronautical and astronomical instrumentation. 8. Ozark-Ouachita Highlands Assessment: Aquatic Conditions Science.gov (United States) Forest Service U.S.
Department of Agriculture 1999-01-01 This publication provides citizens, private and public organizations, scientists, and others with information about the aquatic conditions in or near national forests in the Ozark-Ouachita Highlands: the Mark Twain in Missouri, the Ouachita in Arkansas and Oklahoma, and the Ozark-St. Francis National Forests in Arkansas. This report includes water quality analyses,... 9. Citizen Seismology Science.gov (United States) Bossu, Rémy; Gilles, Sébastien; Mazet-Roux, Gilles; Kamb, Linus; Frobert, Laurent 2010-05-01 In science, projects which involve volunteers for observations, measurements, or computation are grouped under the term Citizen Science. They range from bird or planet censuses to distributed computing on volunteers' computers. Over the last five years, the EMSC has been developing tools and a strategy to collect information on an earthquake's impact from the first persons to be informed, i.e. the witnesses. By extension, this is named Citizen Seismology. The European Mediterranean Seismological Centre (EMSC), a scientific not-for-profit NGO, benefits from the high visibility of its rapid earthquake information services (www.emsc-csem.org), which attract an average of more than half a million visits a month from 160 countries. Witnesses converge on its site within a couple of minutes of an earthquake's occurrence to find out information about the cause of the shaking they have just been through. The convergence generates abrupt increases in hit rate which can be automatically detected. They are often the first indication of the occurrence of a felt event. Witnesses' locations are determined from their IP addresses. Localities exhibiting a statistically significant increase in traffic are mapped to produce the "felt map". This map, available within 5 to 8 minutes of the earthquake's occurrence, represents the area where the event was felt. It is the fastest way to collect in-situ information on the consequences of an earthquake.
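The felt-map idea described in the record above, flagging localities whose website hit rate jumps significantly above its historical baseline, can be sketched with a simple per-locality z-score test. This is a hypothetical illustration of the statistical idea, not the EMSC's actual system; the threshold value and the per-minute counting window are assumptions.

```python
import statistics

def detect_felt_localities(baseline, current, z_threshold=5.0):
    """Flag localities whose current visit count significantly exceeds
    their historical baseline (simple z-score test per locality).

    baseline : dict mapping locality -> list of past per-minute hit counts
    current  : dict mapping locality -> current per-minute hit count
    Returns the list of localities whose z-score exceeds the threshold.
    """
    felt = []
    for locality, history in baseline.items():
        mean = statistics.fmean(history)
        sd = statistics.pstdev(history) or 1.0  # guard against flat histories
        z = (current.get(locality, 0) - mean) / sd
        if z > z_threshold:
            felt.append(locality)
    return felt
```

Mapping the flagged localities (located via IP address, as the record describes) would then outline the felt area; the converse signal, a significant absence of visitors, would require a symmetric test on negative z-scores.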
Widespread damage regions are expected to be mapped through a significant lack or absence of visitors. A second tool involving the visitors is an online macroseismic questionnaire available in 21 languages. It complements the felt maps as it can describe the level of shaking or damage, but is only available in 90 to 120 minutes. Witnesses can also share their pictures of damage. They have also used it to provide us with exceptional pictures of transient phenomena. With the University of Edinburgh, we are finalising a prototype named ShakemApple, linking Apple 10. Volcano seismology Science.gov (United States) Chouet, B. 2003-01-01 A fundamental goal of volcano seismology is to understand active magmatic systems, to characterize the configuration of such systems, and to determine the extent and evolution of source regions of magmatic energy. Such understanding is critical to our assessment of eruptive behavior and its hazardous impacts. With the emergence of portable broadband seismic instrumentation, availability of digital networks with wide dynamic range, and development of new powerful analysis techniques, rapid progress is being made toward a synthesis of high-quality seismic data to develop a coherent model of eruption mechanics. Examples of recent advances are: (1) high-resolution tomography to image subsurface volcanic structures at scales of a few hundred meters; (2) use of small-aperture seismic antennas to map the spatio-temporal properties of long-period (LP) seismicity; (3) moment tensor inversions of very-long-period (VLP) data to derive the source geometry and mass-transport budget of magmatic fluids; (4) spectral analyses of LP events to determine the acoustic properties of magmatic and associated hydrothermal fluids; and (5) experimental modeling of the source dynamics of volcanic tremor. These promising advances provide new insights into the mechanical properties of volcanic fluids and subvolcanic mass-transport dynamics.
As new seismic methods refine our understanding of seismic sources, and geochemical methods better constrain mass balance and magma behavior, we face new challenges in elucidating the physico-chemical processes that cause volcanic unrest and its seismic and gas-discharge manifestations. Much work remains to be done toward a synthesis of seismological, geochemical, and petrological observations into an integrated model of volcanic behavior. Future important goals must include: (1) interpreting the key types of magma movement, degassing and boiling events that produce characteristic seismic phenomena; (2) characterizing multiphase fluids in subvolcanic 11. Lithospheric processes Energy Technology Data Exchange (ETDEWEB) Baldridge, W. [and others] 2000-12-01 The authors used geophysical, geochemical, and numerical modeling to study selected problems related to Earth's lithosphere. We interpreted seismic waves to better characterize the thickness and properties of the crust and lithosphere. In the southwestern US and Tien Shan, crust of high elevation is dynamically supported above buoyant mantle. In California, mineral fabrics in the mantle correlate with regional strain history. Although plumes of buoyant mantle may explain surface deformation and magmatism, our geochemical work does not support this mechanism for Iberia. Generation and ascent of magmas remains puzzling. Our work in Hawaii constrains the residence of magma beneath Hualalai to be a few hundred to about 1000 years. In the crust, heat drives fluid and mass transport. Numerical modeling yielded robust and accurate predictions of these processes. This work is important fundamental science, and applies to mitigation of volcanic and earthquake hazards, Test Ban Treaties, nuclear waste storage, environmental remediation, and hydrothermal energy. 12. Lithospheric processes International Nuclear Information System (INIS) Baldridge, W.S.
2000-01-01 The authors used geophysical, geochemical, and numerical modeling to study selected problems related to Earth's lithosphere. We interpreted seismic waves to better characterize the thickness and properties of the crust and lithosphere. In the southwestern US and Tien Shan, crust of high elevation is dynamically supported above buoyant mantle. In California, mineral fabrics in the mantle correlate with regional strain history. Although plumes of buoyant mantle may explain surface deformation and magmatism, our geochemical work does not support this mechanism for Iberia. Generation and ascent of magmas remains puzzling. Our work in Hawaii constrains the residence of magma beneath Hualalai to be a few hundred to about 1000 years. In the crust, heat drives fluid and mass transport. Numerical modeling yielded robust and accurate predictions of these processes. This work is important fundamental science, and applies to mitigation of volcanic and earthquake hazards, Test Ban Treaties, nuclear waste storage, environmental remediation, and hydrothermal energy 13. Satellite Remote Sensing in Seismology. A Review Directory of Open Access Journals (Sweden) Andrew A. Tronin 2009-12-01 Full Text Available A wide range of satellite methods is applied now in seismology. The first applications of satellite data for earthquake exploration were initiated in the '70s, when active faults were mapped on satellite images. It was a pure and simple extrapolation of airphoto geological interpretation methods into space. The modern embodiment of this method is alignment analysis. Time series of alignments on the Earth's surface are investigated before and after the earthquake. A further application of satellite data in seismology is related with geophysical methods. Electromagnetic methods have about the same long history of application for seismology. Stable statistical estimates of the ionosphere-lithosphere relation were obtained based on satellite ionosondes.
The most successful current project, "DEMETER", shows impressive results. Satellite thermal infra-red data were applied for earthquake research in the next step. Numerous results have confirmed previous observations of thermal anomalies on the Earth's surface prior to earthquakes. A modern trend is the application of outgoing long-wave radiation for earthquake research. In the '80s a new technology, satellite radar interferometry, opened a new page. Spectacular pictures of co-seismic deformations were presented. Current research is moving in the direction of pre-earthquake deformation detection. GPS technology is also widely used in seismology, both for ionosphere sounding and for ground movement detection. Satellite gravimetry demonstrated its first very impressive results with the catastrophic Indonesian earthquake of 2004. Relatively new applications of remote sensing for seismology, such as atmospheric sounding, gas observations, and cloud analysis, are considered possible candidates for application. 14. Seismology of the Jupiter International Nuclear Information System (INIS) Vorontsov, S.V.; Gudkova, T.V.; Zharkov, V.N. 1989-01-01 The structure and diagnostic properties of the spectrum of free oscillations of models of Jupiter are discussed. The spectrum is very sensitive to the properties of the inner core and density discontinuities in the interior of the planet. It is shown that in the seismology of Jupiter, unlike solar seismology, it is not possible to use asymptotic theory to investigate the high-frequency part of the acoustic spectrum 15. Quality of water resources of the Ouachita National Forest, Arkansas Science.gov (United States) Cole, Elizabeth F.; Morris, E.E. 1986-01-01 Surface water and groundwater quality was documented in the Ouachita National Forest by collecting surface water quality data at 15 points and groundwater quality data at 11 sites from April 1984 through August 1985.
The data were compared to drinking water standards and the results are tabulated. Surface water in the Ouachita National Forest is relatively abundant. It is low in mineralization and chemically suitable for most uses with minimal treatment. Groundwater is relatively scarce. The low yields of wells limit the use of groundwater primarily to domestic use. The water is chemically suitable for most purposes but may require treatment for the removal of iron. (Peters-PTT) 16. Broad band seismology in the Scotia region. The base Esperanza seismological observatory International Nuclear Information System (INIS) Russi, M.; Costa, G.; Febrer, J. 1995-08-01 The lithospheric study and the identification of relevant lateral heterogeneities in the Antarctic continent and borderlands are essential to understand the geodynamic evolution both of the continental and oceanic bordering regions. The complexity of the geological evolution and the structural properties of the lithosphere in the Scotia area have been stressed by many authors. The present setting of the area is the result of the mutual interaction among the Antarctic, South American and several minor plates whose geodynamic history and actual boundaries are still partially unknown. The intense seismic activity that characterizes the region encourages the use of the seismological approach to investigate the lithospheric structure of the area. Since January 1992 a broad-band three-component station has been operating at the Antarctic base Esperanza in the NE area of the Antarctic Peninsula. The station has been installed with financial support of the Italian Programma Nazionale di Ricerche in Antartide (PNRA) by Osservatorio Geofisico Sperimentale (OGS) and Instituto Antartico Argentino (IAA). Russi et al.
(1994) have analyzed selected recordings using the frequency-time analysis (FTAN) method, obtaining some relevant information on the large-scale structure of the lithosphere in the Scotia region even though only data recorded by a single station were available. The extension of our analysis to further events and to horizontal-component records is presented here. Within the framework of the international co-operation on the Antarctic Seismographic Network, the OGS and the IAA are upgrading the Esperanza station and installing an additional broad-band station near the town of Ushuaia (Tierra del Fuego, Argentina) with the financial support of PNRA. The inversion of the dispersion curves through the FTAN of the signals recorded by an increased number of stations and generated by events with source-station paths spanning the region will allow us to extract the elastic and anelastic 17. Jesuits in seismology Science.gov (United States) Linehan, D. 1984-01-01 Jesuits have been involved with scientific endeavors since the 16th century, although their association with seismology is more recent. What impelled Jesuit priests to also become seismologists is a matter of conjecture. Certainly the migration of missionaries to various parts of the world must have resulted in queries to their fellow Jesuits in Europe. What caused earthquakes? Could they be predicted? Were they connected with the weather? 18. Controlled Noise Seismology KAUST Repository Hanafy, Sherif M. 2015-08-19 We use controlled noise seismology (CNS) to generate surface waves, where we continuously record seismic data while generating artificial noise along the profile line. To generate the CNS data we drove a vehicle around the geophone line and continuously recorded the generated noise. The recorded data set is then correlated over different time windows and the correlograms are stacked together to generate the surface waves.
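The correlate-and-stack workflow in the Controlled Noise Seismology record above is the standard seismic-interferometry recipe: cut the continuous records at two receivers into windows, cross-correlate them window by window, and stack the correlograms so that coherent travel-time information emerges as a virtual-source arrival. The following is a minimal synthetic sketch of that recipe, not the authors' code; the window length and lag range are assumptions.

```python
import numpy as np

def stacked_crosscorrelation(a, b, win, max_lag):
    """Cross-correlate two continuous traces window by window and stack.

    a, b    : 1-D arrays sampled identically (continuous noise records)
    win     : window length in samples
    max_lag : maximum lag in samples kept on each side of zero lag
    Returns (lags, stacked correlogram). Coherent energy travelling
    between the two receivers appears as a peak at the inter-receiver
    travel time, while incoherent noise averages out in the stack.
    """
    n_win = len(a) // win
    stack = np.zeros(2 * max_lag + 1)
    lags = np.arange(-max_lag, max_lag + 1)
    for i in range(n_win):
        wa = a[i * win:(i + 1) * win]
        wb = b[i * win:(i + 1) * win]
        full = np.correlate(wa, wb, mode="full")  # length 2*win - 1
        mid = win - 1                             # index of zero lag
        stack += full[mid - max_lag:mid + max_lag + 1]
    return lags, stack / n_win
```

Note the sign convention: with numpy's correlation definition, a copy of trace a delayed by d samples produces a stacked peak at lag -d. Gathering such correlograms over many receiver pairs yields the virtual shot gathers the record describes.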
The virtual shot gathers reveal surface waves with moveout velocities that closely approximate those from active source shot gathers. 19. The Colombia Seismological Network Science.gov (United States) Blanco Chia, J. F.; Poveda, E.; Pedraza, P. 2013-05-01 The latest seismological equipment and data processing instrumentation installed at the Colombia Seismological Network (RSNC) are described. System configuration, network operation, and data management are discussed. The data quality and the new seismological products are analyzed. The main purpose of the network is to monitor local seismicity with a special emphasis on seismic activity surrounding the Colombian Pacific and Caribbean oceans, for early warning in case a tsunami is produced by an earthquake. The Colombian territory is located at the northwestern corner of South America, where three tectonic plates converge: Nazca, Caribbean and the South American. The dynamics of these plates, when resulting in earthquakes, is continuously monitored by the network. In 2012, the RSNC registered an average of 67 events per day; of these, a mean of 36 earthquakes per day could be located well. In 2010 the network also registered an average of 67 events, but it was only possible to locate a mean of 28 earthquakes daily. This difference is due to the expansion of the network. The network is made up of 84 stations equipped with different kinds of broadband 40s and 120s seismometers, accelerometers and short-period 1s sensors. The signal is transmitted continuously in real-time to the Central Recording Center located at Bogotá, using satellite, telemetry, and Internet. Moreover, there are some other stations which are required to collect the information in situ. Data is recorded and processed digitally using two different systems, EARTHWORM and SEISAN, which are able to process and share the information between them. The RSNC has designed and implemented a web system to share the seismological data.
This innovative system uses tools like JavaScript, Oracle and programming languages like PHP to allow the users to access the seismicity registered by the network almost in real time as well as to download the waveform and technical details. The coverage 20. Controlled Noise Seismology KAUST Repository Hanafy, Sherif M.; AlTheyab, Abdullah; Schuster, Gerard T. 2015-01-01 We use controlled noise seismology (CNS) to generate surface waves, where we continuously record seismic data while generating artificial noise along the profile line. To generate the CNS data we drove a vehicle around the geophone line and continuously recorded the generated noise. The recorded data set is then correlated over different time windows and the correlograms are stacked together to generate the surface waves. The virtual shot gathers reveal surface waves with moveout velocities that closely approximate those from active source shot gathers. 1. Geology and seismology International Nuclear Information System (INIS) Schneider, J.F.; Blanc, B. 1980-01-01 For the construction of nuclear power stations, comprehensive site investigations are required to assure the adequacy and suitability of the site under consideration, as well as to establish the basic design data for designing and building the plant. The site investigations cover mainly the following matters: geology, seismology, hydrology, meteorology. Site investigations for nuclear power stations are carried out in stages in increasing detail and to an appreciable depth in order to assure the soundness of the project, and, in particular, to determine all measures required to assure the safety of the nuclear power station and the protection of the population against radiation exposure. The aim of seismological investigations is to determine the strength of the vibratory ground motion caused by an expected strong earthquake in order to design the plant resistant enough to take up these vibrations.
In addition, secondary effects of earthquakes, such as landslides, liquefaction, surface faulting, etc. must be studied. For seashore sites, the tsunami risk must be evaluated. (orig.) 2. Filters and templates: stonefly (Plecoptera) richness in Ouachita Mountains streams, U.S.A Science.gov (United States) Andrew L. Sheldon; Melvin L. Warren 2009-01-01 1. We collected adult stoneflies periodically over a 1-year period at 38 sites in two headwater catchments in the Ouachita Mountains, Arkansas, U.S.A. The 43 species collected were a subset of the Ozark-Ouachita fauna and the much larger fauna of the eastern U.S.A. We estimated 78–91% species coverage in... 3. Forensic seismology revisited Science.gov (United States) Douglas, A. 2007-01-01 contrast simple, comprising one or two cycles of large amplitude followed by a low-amplitude coda. Earthquake signals on the other hand were often complex with numerous arrivals of similar amplitude spread over 35 s or more. It therefore appeared that earthquakes could be recognised on complexity. Later however, complex explosion signals were observed which reduced the apparent effectiveness of complexity as a criterion for identifying earthquakes. Nevertheless, the AWE Group concluded that for many paths to teleseismic distances, Earth is transparent for P signals and this provides a window through which source differences will be most clearly seen. Much of the research by the Group has focused on understanding the influence of source type on P seismograms recorded at teleseismic distances. Consequently the paper concentrates on teleseismic methods of distinguishing between explosions and earthquakes.
One of the most robust criteria for discriminating between earthquakes and explosions is the mb:Ms criterion, which compares the amplitudes of the SP P waves as measured by the body-wave magnitude mb, and the long-period (LP: ˜0.05 Hz) Rayleigh-wave amplitude as measured by the surface-wave magnitude Ms; the P and Rayleigh waves being the main wave types used in forensic seismology. For a given Ms, the mb for explosions is larger than for most earthquakes. The criterion is difficult to apply however, at low magnitude (say mb fail. Consequently the AWE Group in cooperation with the University of Cambridge used seismogram modelling to try and understand what controls complexity of SP P seismograms, and to put the mb:Ms criterion on a theoretical basis. The results of this work show that the mb:Ms criterion is robust because several factors contribute to the separation of earthquakes and explosions. The principal reason for the separation however, is that for many orientations of the earthquake source there is at least one P nodal plane in the teleseismic
Long-range refraction profiles are frequently characterized by subcontinental mantle that exhibits a complex stratification within the top 200 km. The shallow layering of this package can behave as an imperfect waveguide giving rise to the so-called teleseismic Pn phase, while the L-discontinuity may define its lower base as the culmination of a low velocity zone. High-resolution, seismic reflection profiling provides sufficient detail in a number of cases to document the merging of mantle interfaces into lower continental crust below former collisional sutures and magmatic arcs, thus unambiguously identifying some lithospheric discontinuities with thrust faults and subducted oceanic lithosphere. Collectively, these and other seismic observations point to a continental lithosphere whose internal structure is dominated by a laterally variable, subhorizontal layering. This stratigraphy appears to be more pronounced at shallower lithospheric levels, includes dense, anisotropic layers of order 10 km in thickness, and exhibits horizontal correlation lengths comparable to the lateral dimensions of overlying crustal blocks. A model of craton evolution which relies on shallow subduction as a principal agent of craton stabilization is shown to be broadly compatible with these characteristics. 5. Bucharest urban seismology International Nuclear Information System (INIS) Balan, Stefan Florin; Ritter, Joachim R.R. 2005-01-01 An important project was carried out in Bucharest area by the National Institute of Research-Development for Earth Physics and Collaborative Research Center 461 (CRC 461) Geophysical Institute from the University of Karlsruhe (Germany) in the period October 2003 - August 2004. The project consists of an array of 33 stations, uniformly arranged in the city of Bucharest and in the outskirts (Magurele, Voluntari, Otopeni, Buftea, etc). The station functioned 24 h/day for a period of 10 months. 
The number of functioning stations varied slightly over time; some of them had to be moved because their sites became unsuitable. The sensors used by the stations were of the types STS-2, LE-3D, 4OT, 3ESP and KS2000. Continuous recording was made possible by equipping each station with a 120 GB hard disk drive, giving three months of autonomy. To guard against accidental power interruptions, each station carried a rechargeable battery. Each station was serviced monthly to avoid accidental stops, which usually resulted from mechanical bumps. All the data recorded by the stations were saved on DVDs, the final number being around 140. This project helped gather a large number of seismological data for the city of Bucharest and its outskirts from seismic events of magnitude 4, 3 and 2, and from ambient noise. (authors) Science.gov (United States) Gardine, L.; Tape, C.; West, M. E. 2014-12-01 Despite residing in a state with 75% of North American earthquakes and three of the top 15 ever recorded, most Alaskans have limited knowledge about the science of earthquakes. To many, earthquakes are just part of everyday life, and to others, they are barely noticed until a large event happens, and often ignored even then. Alaskans are rugged, resilient people with both strong independence and tight community bonds. Rural villages in Alaska, most of which are inaccessible by road, are underrepresented in outreach efforts. Their remote locations and difficulty of access make outreach fiscally challenging. Teacher retention and small student bodies limit exposure to science and hinder student success in college. The arrival of EarthScope's Transportable Array, the 50th anniversary of the Great Alaska Earthquake, targeted projects with large outreach components, and increased community interest in earthquake knowledge have provided opportunities to spread information across Alaska.
We have found that performing hands-on demonstrations, identifying seismological relevance toward career opportunities in Alaska (such as natural resource exploration), and engaging residents through place-based experience have increased the public's interest and awareness of our active home. 7. The continental lithosphere DEFF Research Database (Denmark) Artemieva, Irina 2009-01-01 The goal of the present study is to extract non-thermal signal from seismic tomography models in order to distinguish compositional variations in the continental lithosphere and to examine if geochemical and petrologic constraints on global-scale compositional variations in the mantle...... are consistent with modern geophysical data. In the lithospheric mantle of the continents, seismic velocity variations of a non-thermal origin (calculated from global Vs seismic tomography data [Grand S.P., 2002. Mantle shear-wave tomography and the fate of subducted slabs. Philosophical Transactions...... and evolution of Precambrian lithosphere: A global study. Journal of Geophysical Research 106, 16387–16414.] show strong correlation with tectono-thermal ages and with regional variations in lithospheric thickness constrained by surface heat flow data and seismic velocities. In agreement with xenolith data... 8. Seismological Constraints on Geodynamics Science.gov (United States) Lomnitz, C. 2004-12-01 Earth is an open thermodynamic system radiating heat energy into space. A transition from geostatic earth models such as PREM to geodynamical models is needed. We discuss possible thermodynamic constraints on the variables that govern the distribution of forces and flows in the deep Earth. In this paper we assume that the temperature distribution is time-invariant, so that all flows vanish at steady state except for the heat flow Jq per unit area (Kuiken, 1994). Superscript 0 will refer to the steady state while x denotes the excited state of the system. 
We may write σ⁰ = (Jq⁰ · Xq⁰)/T, where Xq is the conjugate force corresponding to Jq, and σ is the rate of entropy production per unit volume. Consider now what happens after the occurrence of an earthquake at time t=0 and location (0,0,0). The earthquake introduces a stress drop ΔP(x,y,z) at all points of the system. Response flows are directed along the gradients toward the epicentral area, and the entropy production will increase with time as (Prigogine, 1947) σˣ(t) = σ⁰ + α₁/(t+β) + α₂/(t+β)² + ... A seismological constraint on the parameters may be obtained from Omori's empirical relation N(t) = p/(t+q), where N(t) is the number of aftershocks at time t following the main shock. It may be assumed that p/q ~ α₁/β times a constant. Another useful constraint is the Mexican-hat geometry of the seismic transient as obtained e.g. from InSAR radar interferometry. For strike-slip events such as Landers the distribution of ΔP is quadrantal, and an oval-shaped seismicity gap develops about the epicenter. A weak outer triggering maximum is found at a distance of about 17 fault lengths. Such patterns may be extracted from earthquake catalogs by statistical analysis (Lomnitz, 1996). Finally, the energy of the perturbation must be at least equal to the recovery energy. The total energy expended in an aftershock sequence can be found approximately by integrating the local contribution over 9. Bringing Seismology's Grand Challenges to the Undergraduate Classroom Science.gov (United States) Benoit, M. H.; Taber, J.; Hubenthal, M. 2011-12-01 The "Seismological Grand Challenges in Understanding Earth's Dynamic Systems," a community-written long-range science plan for the next decade, poses 10 questions to guide fundamental seismological research.
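Omori's relation N(t) = p/(t + q), cited in the entry above, describes a hyperbolic decay of aftershock rate with time after the mainshock. A minimal Python sketch, using purely illustrative parameter values (the original gives no numbers):

```python
def omori_rate(t, p, q):
    """Omori's empirical aftershock relation N(t) = p / (t + q),

    where t is the time since the mainshock (e.g. in days) and p, q are
    empirical constants fitted to a particular aftershock sequence.
    """
    if t < 0:
        raise ValueError("t must be non-negative")
    return p / (t + q)

# Illustrative (not fitted) parameters: the rate at t = 0 is p/q, and it
# falls off hyperbolically thereafter.
p, q = 200.0, 2.0
rates = [omori_rate(t, p, q) for t in (0.0, 1.0, 10.0, 100.0)]
```

For these hypothetical parameters the rate drops from p/q = 100 events per day at t = 0 to about 2 per day at t = 100, which is the hyperbolic decay that the abstract ties, via p/q ~ α₁/β, to the leading term of the entropy-production series.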
Written in an approachable fashion suitable for policymakers, the broad questions and supporting discussion contained in this document offer an ideal framework for the development of undergraduate curricular materials. Leveraging this document, we have created a collection of inquiry-based classroom modules that utilize authentic data to modernize seismological instruction in 100- and 200-level undergraduate courses. The modules not only introduce undergraduates to the broad questions that the seismological community seeks to answer in the future but also showcase the numerous areas where modern seismological research is actively contributing to our understanding of fundamental Earth processes. To date, 6 in-depth explorations that correspond to the Grand Challenges document have been developed. The specific topics for each exploration were selected to showcase modern seismological research while also covering topics commonly included in the curriculum of these introductory classes. Examples of activities that have been created and their corresponding Grand Challenge include: - A guided inquiry that introduces students to episodic tremor and slip and compares the GPS and seismic signatures of ETS with those produced by standard tectonic earthquakes (Grand Challenge "How do faults slip?"). - A laboratory exercise where students engage in b-value mapping of volcanic earthquakes to assess potential eruption hazards (How do magmas ascend and erupt?). - A module that introduces students to glacial earthquakes in Greenland and compares their frequency and spatial distribution to those of tectonic earthquakes (How do processes in the ocean and atmosphere interact with the solid Earth?). - An activity that addresses the question "What is the relationship between stress and strain in the lithosphere?" 10. Updated Reference Model for Heat Generation in the Lithosphere Science.gov (United States) Wipperfurth, S. A.; Sramek, O.; Roskovec, B.; Mantovani, F.; McDonough, W. F.
2017-12-01 Models integrating geophysics and geochemistry allow for characterization of the Earth's heat budget and geochemical evolution. Global lithospheric geophysical models are now constrained by surface and body wave data and are classified into several unique tectonic types. Global lithospheric geochemical models have evolved from petrological characterization of layers to a combination of petrologic and seismic constraints. Given these advances in our knowledge of the lithosphere, it is necessary to create an updated chemical and physical reference model. We are developing a global lithospheric reference model based on LITHO1.0 (segmented into 1° lon × 1° lat × 9 layers) and seismological-geochemical relationships. Uncertainty assignments and correlations are assessed for its physical attributes, including layer thickness, Vp and Vs, and density. This approach yields uncertainties for the masses of the crust and lithospheric mantle. Heat-producing element abundances (HPE: U, Th, and K) are ascribed to each volume element. These chemical attributes are based upon the composition of subducting sediment (sediment layers), composition of surface rocks (upper crust), a combination of petrologic and seismic correlations (middle and lower crust), and a compilation of xenolith data (lithospheric mantle). The HPE abundances are correlated within each voxel, but not vertically between layers. Efforts to provide correlation of abundances horizontally between each voxel are discussed. These models are further used to critically evaluate the bulk lithosphere heat production in the continents and the oceans. Cross-checks between our model and results from: 1) heat flux (Artemieva, 2006; Davies, 2013; Cammarano and Guerri, 2017), 2) gravity (Reguzzoni and Sampietro, 2015), and 3) geochemical and petrological models (Rudnick and Gao, 2014; Hacker et al. 2015) are performed. 11.
Mathematical treatment of seismologic data International Nuclear Information System (INIS) Gama, C.A.J.V.D. da The principal methods of seismologic data treatment with application in engineering design are examined, emphasizing the need for reliable data, appropriate algorithms and rigorous calculations so that correct results and valid conclusions are achieved. (E.G.) [pt 12. Seismological programs in Costa Rica Science.gov (United States) Montero, W.; Spall, Henry 1983-01-01 At the beginning of the 1970s, a series of programs in seismology were initiated by different Costa Rican institutions, and some of these programs are still in the process of development. The institutions are Instituto Costarricense de Electricidad (ICE) - the Costa Rica Institute of Electricity 13. A Look at the Future of Controlled-Source Seismology Science.gov (United States) Keller, G. R.; Klemperer, S.; Hole, J.; Snelson, C. 2008-12-01 Facilities like EarthScope and IRIS/PASSCAL offer a framework in which to re-assess the role of our highest-resolution geophysical tool, controlled-source seismology. This tool is effective at scales ranging from near-surface studies of the upper 100 m of the crust to studies of Moho structure and the lithospheric mantle. IRIS has now existed for over two decades and has transformed the way in which passive-source seismology in particular is carried out. Progress over these two decades has led to major discoveries about continental architecture and evolution through the development of three-dimensional images of the upper mantle and lithosphere. Simultaneously the hydrocarbon exploration industry has mapped increasingly large fractions of our sedimentary basins in three dimensions and at unprecedented resolution and fidelity.
Thanks to the additional instruments in the EarthScope facility, a clear scientific need and opportunity exists to map, at similar resolution, all of the crust - the igneous/metamorphic basement, the non-petroliferous basins that contain the record of continental evolution, and the seismogenic faults and active volcanoes that are the principal natural hazards we face. Controlled-source seismology remains the fundamental technology behind exploration for all fossil fuels and many water resources, and as such is a multi-billion-dollar industry centered in the USA. Academic scientists are leaders in developing the algorithms to process the most advanced industry data, but lack the academic data sets to which to apply this technology. University and government controlled-source seismologists, and their students who will populate the exploration industry, are increasingly divorced from that industry by their reliance on sparse spatial recording of usually only a single component of the wavefield, generated by even sparser seismic sources. However, if we can find the resources, the technology now exists to provide seismic images of immense 14. Mercury's Lithospheric Magnetization Science.gov (United States) Johnson, C.; Phillips, R. J.; Philpott, L. C.; Al Asad, M.; Plattner, A.; Mast, S.; Kinczyk, M. J.; Prockter, L. M. 2017-12-01 Magnetic field data obtained by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft have been used to demonstrate the presence of lithospheric magnetization on Mercury. Larger amplitude fields resulting from the core dynamo and the strongly time-varying magnetospheric current systems are first estimated and subtracted from the magnetic field data to isolate lithospheric signals with wavelengths less than 500 km. These signals (hereafter referred to as data) are only observed at spacecraft altitudes less than 120 km, and are typically a few to 10 nT in amplitude.
We present and compare equivalent source dipole magnetization models for latitudes 35°N to 75°N obtained from two distinct approaches to constrain the distribution and origin of lithospheric magnetization. First, models that fit either the data or the surface field predicted from a regional spherical harmonic representation of the data (see Plattner & Johnson abstract) and that minimize the root mean square (RMS) value of the magnetization are derived. Second, models in which the spatial distribution of magnetization required to fit the data is minimized are derived using the approach of Parker (1991). As seen previously, the largest amplitudes of lithospheric magnetization are concentrated around the Caloris basin. With this exception, across the northern hemisphere there are no overall correlations of magnetization with surface geology, although higher magnetizations are found in regions with darker surfaces. Similarly, there is no systematic correlation of magnetization signatures with crater materials, although there are specific instances of craters with interiors or ejecta that have magnetizations distinct from the surrounding region. For the latter case, we observe no correlation of the occurrence of these signatures with crater degradation state (a proxy for age). At the lowest spacecraft altitudes, source depths less than O(10 km) are unlikely in most regions 15. Seismic structure of the lithosphere beneath NW Namibia: Impact of the Tristan da Cunha mantle plume Science.gov (United States) Yuan, Xiaohui; Heit, Benjamin; Brune, Sascha; Steinberger, Bernhard; Geissler, Wolfram H.; Jokat, Wilfried; Weber, Michael 2017-01-01 Northwestern Namibia, at the landfall of the Walvis Ridge, was affected by the Tristan da Cunha mantle plume during continental rupture between Africa and South America, as evidenced by the presence of the Etendeka continental flood basalts.
Here we use data from a passive-source seismological network to investigate the upper mantle structure and to elucidate the Cretaceous mantle plume-lithosphere interaction. Receiver functions reveal an interface associated with a negative velocity contrast within the lithosphere at an average depth of 80 km. We interpret this interface as the relic of the lithosphere-asthenosphere boundary (LAB) formed during the Mesozoic by interaction of the Tristan da Cunha plume head with the pre-existing lithosphere. The velocity contrast might be explained by stagnated and "frozen" melts beneath an intensively depleted and dehydrated peridotitic mantle. The present-day LAB is poorly visible with converted waves, indicating a gradual impedance contrast. Beneath much of the study area, converted phases of the 410 and 660 km mantle transition zone discontinuities arrive 1.5 s earlier than in the landward plume-unaffected continental interior, suggesting high velocities in the upper mantle caused by a thick lithosphere. This indicates that after lithospheric thinning during continental breakup, the lithosphere has increased in thickness during the last 132 Myr. Thermal cooling of the continental lithosphere alone cannot produce the lithospheric thickness required here. We propose that the remnant plume material, which has a higher seismic velocity than the ambient mantle due to melt depletion and dehydration, significantly contributed to the thickening of the mantle lithosphere. 16. Monarch (Danaus plexippus L. Nymphalidae) migration, nectar resources and fire regimes in the Ouachita Mountains of Arkansas Science.gov (United States) D. Craig Rudolph; Charles A. Ely; Richard R. Schaefer; J. Howard Williamson; Ronald E. Thill 2006-01-01 Monarchs (Danaus plexippus) pass through the Ouachita Mountains in large numbers in September and October on their annual migration to overwintering sites in the Transvolcanic Belt of central Mexico. 
Monarchs are dependent on nectar resources to fuel their migratory movements. In the Ouachita Mountains of west-central Arkansas migrating monarchs... 17. Refining Southern California Geotherms Using Seismologic, Geologic, and Petrologic Constraints Science.gov (United States) Thatcher, W. R.; Chapman, D. S.; Allam, A. A.; Williams, C. F. 2017-12-01 Lithospheric deformation in tectonically active regions depends on the 3D distribution of rheology, which is in turn critically controlled by temperature. Under the auspices of the Southern California Earthquake Center (SCEC) we are developing a 3D Community Thermal Model (CTM) to constrain rheology and so better understand deformation processes within this complex but densely monitored and relatively well-understood region. The San Andreas transform system has sliced southern California into distinct blocks, each with characteristic lithologies, seismic velocities and thermal structures. Guided by the geometry of these blocks we use more than 250 surface heat-flow measurements to define 13 geographically distinct heat flow regions (HFRs). Model geotherms within each HFR are constrained by averages and variances of surface heat flow q0 and the 1D depth distribution of thermal conductivity (k) and radiogenic heat production (A), which are strongly dependent on rock type. Crustal lithologies are not always well known and we turn to seismic imaging for help. We interrogate the SCEC Community Velocity Model (CVM) to determine averages and variances of Vp, Vs and Vp/Vs versus depth within each HFR. We bound (A, k) versus depth by relying on empirical relations between seismic wave speed and rock type and laboratory and modeling methods relating (A, k) to rock type. Many 1D conductive geotherms for each HFR are allowed by the variances in surface heat flow and subsurface (A, k). 
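The 1D conductive geotherms referred to in the geotherm entry above follow from the standard steady-state heat-conduction solution for a layer with uniform conductivity and heat production. A minimal Python sketch, with purely illustrative parameter values (not values taken from the study):

```python
def conductive_geotherm(z, t_surf, q0, k, a):
    """Steady-state 1D conductive geotherm,

        T(z) = T0 + (q0 / k) * z - (A / (2 * k)) * z**2,

    for depth z (m), surface temperature T0 (degrees C), surface heat
    flow q0 (W/m^2), thermal conductivity k (W/m/K), and uniform
    radiogenic heat production A (W/m^3).
    """
    return t_surf + (q0 / k) * z - (a / (2.0 * k)) * z ** 2

# Illustrative values only: q0 = 80 mW/m^2, k = 3 W/m/K, A = 2 uW/m^3,
# evaluated at 10 km depth.
t_10km = conductive_geotherm(10e3, 10.0, 80e-3, 3.0, 2e-6)
```

Sweeping q0 and (A, k) through their observed variances, as the abstract describes, generates the family of admissible geotherms for each heat flow region.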
An additional constraint on the lithosphere temperature field is provided by comparing lithosphere-asthenosphere boundary (LAB) depths identified seismologically with those defined thermally as the depth of onset of partial melting. Receiver function studies in Southern California indicate LAB depths that range from 40 km to 90 km. Shallow LAB depths are correlated with high surface heat flow and deep LAB with low heat flow. The much-restricted families of geotherms that intersect peridotite 18. Web Based Seismological Monitoring (WBSM) Science.gov (United States) Giudicepietro, F.; Meglio, V.; Romano, S. P.; de Cesare, W.; Ventre, G.; Martini, M. Over the last few decades, seismological monitoring systems have improved dramatically thanks to technological advancements and to scientific progress in seismological studies. The most modern processing systems use network technologies to achieve high performance in data transmission and remote control. Their architecture is designed to favor real-time signal analysis. This is usually realized by adopting a modular structure that allows any new calculation algorithm to be integrated easily, without affecting the other system functionalities. A further step in the evolution of seismic processing systems is the widespread use of web-based applications. Web technologies can usefully support monitoring activities by allowing automatic publication of signal-processing results and by favoring remote access to data, software systems and instrumentation. An application of web technologies to seismological monitoring has been developed at the "Osservatorio Vesuviano" monitoring center (INGV) in collaboration with the "Dipartimento di Informatica e Sistemistica" of the University of Naples. A system named Web Based Seismological Monitoring (WBSM) has been developed.
Its main objective is to automatically publish seismic event processing results and to allow seismic data to be displayed, analyzed and downloaded via the Internet. WBSM uses XML technology to represent hypocentral and picking parameters and creates a seismic event database containing parametric data and waveforms. To provide tools for evaluating the quality and reliability of the published locations, WBSM also supplies all the quality parameters calculated by the locating program and allows interactive display of the waveforms and the related parameters. WBSM is a modular system in which the interface to the data sources is performed by two specific modules, so as to make it work in conjunction with a 19. Integration of geothermal data along the Balcones/Ouachita trend, central Texas. Final report Energy Technology Data Exchange (ETDEWEB) Woodruff, C.M. Jr.; Gever, C.; Snyder, Fred R.; Wuerch, David Robert 1983-01-01 This report presents data that address possible controls on warm-water resources. Data are presented on a series of maps, and interpretations appear in the brief text accompanying the maps. It is thought that structural controls provided by the Balcones Fault Zone on the west and by the Luling-Mexia-Talco Fault Zone on the east localize the warm waters. The ultimate controlling attribute is the foundered Ouachita structural belt, which, in turn, has controlled the orientation and magnitude of displacement of the superjacent normal fault systems.
This thesis is supported by maps (in pocket) showing the following: distribution of thermal waters measured in wells along the Balcones/Ouachita structural trend showing water temperature in °F, total depth of the well measured, water salinity in parts per million, and the geologic formation producing the water; structural contours on the base of the Cretaceous System showing the configuration of the Paleozoic Ouachita basement; structural configuration of the Balcones and Luling Fault Zone, Mexia and Talco Fault Zone, and foreland areas adjacent to the Ouachita Orogen using data from the Buda Limestone, Sligo Formation, and Ellenburger Group; Landsat lineaments and Bouguer gravity contours; and geothermal gradient contours of the Balcones/Ouachita trend based on thermal values from Paleozoic and selected Mesozoic formations. 20. Global teaching of global seismology Science.gov (United States) Stein, S.; Wysession, M. 2005-12-01 Our recent textbook, Introduction to Seismology, Earthquakes, & Earth Structure (Blackwell, 2003) is used in many countries. Part of the reason for this may be our deliberate attempt to write the book for an international audience. This effort appears in several ways. We stress seismology's long tradition of global data interchange. Our brief discussions of the science's history illustrate the contributions of scientists around the world. Perhaps most importantly, our discussions of earthquakes, tectonics, and seismic hazards take a global view. Many examples are from North America, while others come from elsewhere. Our view is that non-North American students should be exposed to North American examples that are type examples, and that North American students should be similarly exposed to examples elsewhere.
For example, we illustrate how the Euler vector geometry changes a plate boundary from spreading, to strike-slip, to convergence using both the Pacific-North America boundary from the Gulf of California to Alaska and the Eurasia-Africa boundary from the Azores to the Mediterranean. We illustrate diffuse plate boundary zones using western North America, the Andes, the Himalayas, the Mediterranean, and the East Africa Rift. The subduction zone discussions examine Japan, Tonga, and Chile. We discuss significant earthquakes both in the U.S. and elsewhere, and explore hazard mitigation issues in different contexts. Both comments from foreign colleagues and our experience lecturing overseas indicate that this approach works well. Beyond the specifics of our text, we believe that such a global approach is facilitated by the international traditions of the earth sciences and the worldwide youth culture that gives students a common frame of reference. For example, a video of the scene in New Madrid, Missouri that arose from a nonsensical earthquake prediction in 1990 elicits similar responses from American and European students. 1. Reptile Communities Under Diverse Forest Management in the Ouachita Mountains, Arkansas Science.gov (United States) Paul A. Shipman; Stanley F. Fox; Ronald E. Thill; Joseph P. Phelps; David M. Leslie 2004-01-01 Abstract - From May 1995 to March 1999, we censused reptiles in the Ouachita Mountains, Arkansas, on approximately 60 plots on each of four forested watersheds five times per year, with new plots each year. We found that the least intensively managed watershed had significantly lower per-plot reptile abundances, species richness, and diversity.... 2. The Role of Regional Factors in Structuring Ouachita Mountain Stream Assemblages Science.gov (United States) Lance R. Williams; Christopher M. Taylor; Melvin L. Warren; J.
Alan Clingenpeel 2004-01-01 Abstract - We used Basin Area Stream Survey data from the USDA Forest Service, Ouachita National Forest to evaluate the relationship between regional fish and macroinvertebrate assemblages and environmental variability (both natural and anthropogenic). Data were collected for three years (1990-1992) from six hydrologically variable stream systems in... 3. Rock fragment distributions and regolith evolution in the Ouachita Mountains, Arkansas, USA Science.gov (United States) Jonathan D. Phillips; Ken Luckow; Daniel A. Marion; Kristin R. Adams 2005-01-01 Rock fragments in the regolith are a persistent property that reflects the combined influences of geologic controls, erosion, deposition, bioturbation, and weathering. The distribution of rock fragments in regoliths of the Ouachita Mountains, Arkansas, shows that sandstone fragments are common in all layers, even if sandstone is absent in parent material. Shale and... 4. Geothermal potential along the Balcones/Ouachita trend, central Texas: ongoing assessment and selected case studies Energy Technology Data Exchange (ETDEWEB) Woodruff, C.M. Jr.; Macpherson, G.L.; Gever, C.; Caran, S.C.; El Shazly, A.G. 1984-04-01 A synopsis of the geologic conditions along the Balcones/Ouachita trend is presented. The problems in defining low-temperature resources and in recognizing anomalies are addressed. Local geologic and hydrologic conditions are assayed in terms of ambient thermal regimes, and hypotheses are presented for the origin of thermal waters. 5. Soil quality and productivity responses to watershed restoration in the Ouachita mountains of Arkansas, USA Science.gov (United States) John A. Stanturf; Daniel A. Marion; Martin Spetich; Kenneth Luckow; James M. Guldin; Hal O. Liechty; Calvin E. 
Meier 2000-01-01 The Ouachita Mountains Ecosystem Management Research Project (OEMP) is a large interdisciplinary research project designed to provide the scientific foundation for landscape management at the scale of watersheds. The OEMP has progressed through three phases: developing natural regeneration alternatives to clearcutting and planting; testing of these alternatives at the... 6. Landscape Scale Management in the Ouachita Mountains - Where Operational Practices Meet Research Science.gov (United States) Hunter Speed; Ronald J. Perisho; Samuel Larry; James M. Guldin 1999-01-01 Implementation of ecosystem management on National Forest System lands in the Southern Region requires that the best available science be applied to support forest management practices. On the Ouachita National Forest in Arkansas, personnel from the Jessieville and Winona Ranger Districts and the Southern Research Station have developed working relationships that... 7. Initial Effects of Reproduction Cutting Treatments on Residual Hard Mast Production in the Ouachita Mountains Science.gov (United States) Roger W. Perry; Ronald E. Thill 2003-01-01 We compared indices of total hard mast production (oak and hickory combined) in 20, second-growth, pine-hardwood stands under five treatments to determine the effects of different reproduction treatments on mast production in the Ouachita Mountains. We evaluated mast production in mature unharvested controls and stands under four reproduction cutting methods (single-... 8. Seismology and space-based geodesy Science.gov (United States) Tralli, David M.; Tajima, Fumiko 1993-01-01 The potential of space-based geodetic measurement of crustal deformation in the context of seismology is explored. 
The achievements of seismological source theory and data analyses, mechanical modeling of fault zone behavior, and advances in space-based geodesy are reviewed, with emphasis on realizable contributions of space-based geodetic measurements specifically to seismology. The fundamental relationships between crustal deformation associated with an earthquake and the geodetically observable data are summarized. The response and the spatial and temporal resolution of the geodetic data necessary to understand deformation at various phases of the earthquake cycle are stressed. The use of VLBI, SLR, and GPS measurements for studying global geodynamics properties that can be investigated to some extent with seismic data is discussed. The potential contributions of continuously operating strain monitoring networks and globally distributed geodetic observatories to existing worldwide modern digital seismographic networks are evaluated in reference to mutually addressable problems in seismology, geophysics, and tectonics. 9. Analysis of Lithospheric Stresses Using Satellite Gravimetry: Hypotheses and Applications to North Atlantic Science.gov (United States) Minakov, A.; Medvedev, S. 2017-12-01 Analysis of lithospheric stresses is necessary to gain understanding of the forces that drive plate tectonics and intraplate deformations and the structure and strength of the lithosphere. A major source of lithospheric stresses is believed to lie in variations of surface topography and lithospheric density. The traditional approach to stress estimation is based on direct calculations of the Gravitational Potential Energy (GPE), the depth-integrated density moment of the lithosphere column. GPE is highly sensitive to density structure which, however, is often poorly constrained. Density structure of the lithosphere may be refined using methods of gravity modeling. However, the resulting density models suffer from non-uniqueness of the inverse problem.
An alternative approach is to directly estimate lithospheric stresses (depth-integrated) from satellite gravimetry data. Satellite gravity gradient measurements by the ESA GOCE mission provide a wealth of data for mapping lithospheric stresses if a link between data and stresses or GPE can be established theoretically. The non-uniqueness of interpretation of sources of the gravity signal holds in this case as well. Therefore, the data analysis was tested for the North Atlantic region where reliable additional constraints are supplied by both controlled-source and earthquake seismology. The study involves comparison of three methods of stress modeling: (1) the traditional modeling approach using a thin sheet approximation; (2) the filtered geoid approach; and (3) the direct utilization of the gravity gradient tensor. Whereas the first two approaches (1)-(2) calculate GPE and utilize computationally expensive finite-element mechanical modeling to calculate stresses, approach (3) uses a much simpler numerical treatment but requires simplifying assumptions that have yet to be tested. The orientations of principal stresses and the stress magnitudes modeled by each of the three methods are compared with the World Stress Map. 10. Bulgarian National Digital Seismological Network Science.gov (United States) Dimitrova, L.; Solakov, D.; Nikolova, S.; Stoyanov, S.; Simeonova, S.; Zimakov, L. G.; Khaikin, L. 2011-12-01 The Bulgarian National Digital Seismological Network (BNDSN) consists of a National Data Center (NDC), 13 stations equipped with RefTek High Resolution Broadband Seismic Recorders - model DAS 130-01/3, and 1 station equipped with a Quanterra 680 and broadband sensors and accelerometers. Real-time data transfer from seismic stations to the NDC is realized via a Virtual Private Network of the Bulgarian Telecommunication Company. Communication interruptions do not cause any data loss at the NDC.
The data are backed up in the field station recorder's 4 MB RAM and are retransmitted to the NDC immediately after the communication link is re-established. The recorders are equipped with two CompactFlash disks able to store more than one month of data. The data from the flash disks can be downloaded remotely using FTP. Hardware redundancy for data acquisition and processing at the NDC is achieved with two clustered Sun servers and two Blade Workstations. To secure the acquisition, processing and data storage processes, a three-layer local network is designed at the NDC. Real-time data acquisition is performed using RefTek's full-duplex error-correction protocol RTPD. Data from the Quanterra recorder and foreign stations are fed into RTPD in real time via the SeisComP/SeedLink protocol. Using SeisComP/SeedLink software, the NDC transfers real-time data to INGV-Roma, NEIC-USA, and the ORFEUS Data Center. Regional real-time data exchange with Romania, Macedonia, Serbia and Greece has also been established at the NDC. Data processing is performed by the Seismic Network Data Processor (SNDP) software package running on both servers. SNDP includes the following subsystems: Real-time subsystem (RTS_SNDP) - for signal detection; evaluation of the signal parameters; phase identification and association; source estimation; Seismic analysis subsystem (SAS_SNDP) - for interactive data processing; Early warning subsystem (EWS_SNDP) - based on the first-arriving P phases. The signal detection process is performed by 11. High-performance computing in seismology Energy Technology Data Exchange (ETDEWEB) NONE 1996-09-01 The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology.
There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated, documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved. 12. Lithospheric controls on magma composition along Earth's longest continental hotspot track. Science.gov (United States) Davies, D R; Rawlinson, N; Iaffaldano, G; Campbell, I H 2015-09-24 Hotspots are anomalous regions of volcanism at Earth's surface that show no obvious association with tectonic plate boundaries. Classic examples include the Hawaiian-Emperor chain and the Yellowstone-Snake River Plain province. The majority are believed to form as Earth's tectonic plates move over long-lived mantle plumes: buoyant upwellings that bring hot material from Earth's deep mantle to its surface. It has long been recognized that lithospheric thickness limits the rise height of plumes and, thereby, their minimum melting pressure. It should, therefore, have a controlling influence on the geochemistry of plume-related magmas, although unambiguous evidence of this has, so far, been lacking.
Here we integrate observational constraints from surface geology, geochronology, plate-motion reconstructions, geochemistry and seismology to ascertain plume melting depths beneath Earth's longest continental hotspot track, a 2,000-kilometre-long track in eastern Australia that displays a record of volcanic activity between 33 and 9 million years ago, which we call the Cosgrove track. Our analyses highlight a strong correlation between lithospheric thickness and magma composition along this track, with: (1) standard basaltic compositions in regions where lithospheric thickness is less than 110 kilometres; (2) volcanic gaps in regions where lithospheric thickness exceeds 150 kilometres; and (3) low-volume, leucitite-bearing volcanism in regions of intermediate lithospheric thickness. Trace-element concentrations from samples along this track support the notion that these compositional variations result from different degrees of partial melting, which is controlled by the thickness of overlying lithosphere. Our results place the first observational constraints on the sub-continental melting depth of mantle plumes and provide direct evidence that lithospheric thickness has a dominant influence on the volume and chemical composition of plume-derived magmas. 13. Statistical Seismology and Induced Seismicity Science.gov (United States) Tiampo, K. F.; González, P. J.; Kazemian, J. 2014-12-01 While seismicity triggered or induced by natural resources production such as mining or water impoundment in large dams has long been recognized, the recent increase in the unconventional production of oil and gas has been linked to rapid rise in seismicity in many places, including central North America (Ellsworth et al., 2012; Ellsworth, 2013). Worldwide, induced events of M~5 have occurred and, although rare, have resulted in both damage and public concern (Horton, 2012; Keranen et al., 2013). 
In addition, over the past twenty years, the increase in both number and coverage of seismic stations has resulted in an unprecedented ability to precisely record the magnitude and location of large numbers of small-magnitude events. The increase in the number and type of seismic sequences available for detailed study has revealed differences in their statistics that were previously difficult to quantify. For example, seismic swarms that produce significant numbers of foreshocks as well as aftershocks have been observed in different tectonic settings, including California, Iceland, and the East Pacific Rise (McGuire et al., 2005; Shearer, 2012; Kazemian et al., 2014). Similarly, smaller events have been observed prior to larger induced events in several occurrences associated with energy production. The field of statistical seismology has long focused on the question of triggering and the mechanisms responsible (Stein et al., 1992; Hill et al., 1993; Steacy et al., 2005; Parsons, 2005; Main et al., 2006). For example, in most cases the associated stress perturbations are much smaller than the earthquake stress drop, suggesting an inherent sensitivity to relatively small stress changes (Nalbant et al., 2005). Induced seismicity provides the opportunity to investigate triggering and, in particular, the differences between long- and short-range triggering. Here we investigate the statistics of induced seismicity sequences from around the world, including central North America and Spain, and 14. Birth of the Program for Array Seismic Studies of the Continental Lithosphere (PASSCAL) Science.gov (United States) James, D. E.; Sacks, I. S. 2002-05-01 As recently as 1984 institutions doing portable seismology depended upon their own complement of instruments, almost all designed and built in-house, and all of limited recording capability and flexibility. No data standards existed.
Around 1980 the National Research Council (NRC) of the National Academy of Sciences (NAS), with National Science Foundation (NSF) support, empanelled a committee to study a major new initiative in Seismic Studies of the Continental Lithosphere (SSCL). The SSCL report in 1983 recommended that substantial numbers (1000 or more) of new generation digital seismographs be acquired for 3-D high resolution imaging of the continental lithosphere. Recommendations of the SSCL committee dovetailed with other NRC/NAS and NSF reports that highlighted imaging of the continental lithosphere as an area of highest priority. For the first time in the history of portable seismology the question asked was "What do seismologists need to do the job right?" A grassroots effort was undertaken to define instrumentation and data standards for a powerful new set of modern seismic research tools to serve the national seismological community. In the spring and fall of 1983 NSF and IASPEI sponsored workshops were convened to develop specifications for the design of a new generation of portable instrumentation. PASSCAL was the outgrowth of these seminal studies and workshops. The first step toward the formal formation of PASSCAL began with an ad-hoc organizing committee, comprised largely of the members of the NAS lithospheric seismology panel, convened by the authors at Carnegie Institution in Washington in November 1983. From that meeting emerged plans and promises of NSF support for an open organizational meeting to be held in January 1984, in Madison, Wisconsin. By the end of the two-day Madison meeting PASSCAL and an official consortium of seismological institutions for portable seismology were realities. Shortly after, PASSCAL merged with the complementary 15. 
Lithospheric structure of northwest Africa: Insights into the tectonic history and influence of mantle flow on large-scale deformation Science.gov (United States) Miller, Meghan S.; Becker, Thorsten 2014-05-01 Northwest Africa is affected by late-stage convergence of Africa with Eurasia and by the Canary Island hotspot, and is bounded by the Proterozoic-age West African craton. We present seismological evidence from receiver functions and shear-wave splitting, along with geodynamic modeling, to show how the interactions of these tectonic features resulted in dramatic deformation of the lithosphere. We interpret seismic discontinuities from the receiver functions and find evidence for localized, near vertical-offset deformation of both crust-mantle and lithosphere-asthenosphere interfaces at the flanks of the High Atlas. These offsets coincide with the locations of Jurassic-aged normal faults that have been reactivated during the Cenozoic, further suggesting that inherited, lithospheric-scale zones of weakness were involved in the formation of the Atlas. Another significant step in lithospheric thickness is inferred within the Middle Atlas. Its location corresponds to the source of regional Quaternary alkali volcanism, where the influx of melt induced by the shallow asthenosphere appears restricted to a lithospheric-scale fault on the northern side of the mountain belt. Inferred stretching axes from shear-wave splitting are aligned with the topographic grain in the High Atlas, suggesting along-strike asthenospheric shearing in a mantle channel guided by the lithospheric topography. Isostatic modeling based on our improved lithospheric constraints indicates that lithospheric thinning alone does not explain the anomalous Atlas topography. Instead, a mantle upwelling induced by a hot asthenospheric anomaly appears to be required, likely guided by the West African craton and perhaps sucked northward by subducted lithosphere beneath the Alboran.
This dynamic support scenario for the Atlas also suggests that the timing of uplift is contemporaneous with the recent volcanism in the Middle Atlas. 16. Lithosphere erosion atop mantle plumes Science.gov (United States) Agrusta, R.; Arcay, D.; Tommasi, A. 2012-12-01 Mantle plumes are traditionally proposed to play an important role in lithosphere erosion. Seismic images beneath Hawaii and Cape Verde show a lithosphere-asthenosphere boundary (LAB) up to 50 km shallower than the surroundings. However, numerical models show that unless the plate is stationary, the thermo-mechanical erosion of the lithosphere does not exceed 30 km. We use 2D petrological-thermo-mechanical numerical models based on a finite-difference method on a staggered grid and the marker-in-cell method to study the role of partial melting in the plume-lithosphere interaction. A homogeneous peridotite composition with a Newtonian temperature- and pressure-dependent viscosity is used to simulate both the plate and the convective mantle. A constant velocity, ranging from 5 to 12.5 cm/yr, is imposed at the top of the plate. Plumes are created by imposing a thermal anomaly of 150 to 350 K on a 50 km wide domain at the base of the model (700 km depth); the plate right above the thermal anomaly is 40 Myr old. Partial melting is modeled using batch-melting solidus and liquidus in anhydrous conditions. We model the progressive depletion of peridotite and its effect on partial melting by assuming that the melting degree only strictly increases through time. Melt is accumulated until a porosity threshold is reached and the melt in excess is then extracted. The rheology of the partially molten peridotite is determined using a viscous constitutive relationship based on a contiguity model, which makes it possible to take into account the effects of grain-scale melt distribution. Above a threshold of 1%, melt is instantaneously extracted. The density varies as a function of partial melting degree and extraction.
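The melt bookkeeping described in this abstract — a melting degree that only increases through time, with melt accumulating as porosity and being instantaneously extracted above a 1% threshold — can be sketched as follows. This is a minimal illustration of that rule, not the authors' code; the function and variable names are invented for this sketch.

```python
# Hedged sketch (not the authors' code): per-marker melt bookkeeping in which
# the melting degree only increases through time and any melt porosity above
# a threshold is instantaneously extracted.

POROSITY_THRESHOLD = 0.01  # 1% retained-melt threshold from the abstract

def update_melt(prev_degree, equilibrium_degree, retained, extracted):
    """Advance one marker's melt state by one time step.

    prev_degree        melting degree reached so far (non-decreasing)
    equilibrium_degree degree implied by current P-T from the batch-melting
                       solidus/liquidus (assumed computed elsewhere)
    retained           melt fraction currently held in the pores
    extracted          cumulative extracted melt fraction
    """
    # Depletion: the melting degree can only increase through time.
    new_degree = max(prev_degree, equilibrium_degree)
    # Newly produced melt accumulates as porosity.
    retained += new_degree - prev_degree
    # Melt in excess of the threshold is instantaneously extracted.
    if retained > POROSITY_THRESHOLD:
        extracted += retained - POROSITY_THRESHOLD
        retained = POROSITY_THRESHOLD
    return new_degree, retained, extracted
```

Note how a later step with a lower equilibrium degree (e.g. after cooling) leaves the state unchanged, which is the "strictly increasing" depletion assumption.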
Besides, we analyze the kinematics of the plume as it impacts a moving plate, the dynamics of time-dependent small-scale convection (SSC) instabilities developing in the low-viscosity layer formed by spreading of hot plume material at the lithosphere base, and the resulting thermal 17. Multi-site Management Plan Ecoregional Conservation for the Ouachita Ecoregion Arkansas and Oklahoma Science.gov (United States) 2006-08-01 Poultry Disposal Pits. Ark. Water Res. Cen. fact sheet no. 2, n.d. Fausch, Kurt D., et al. Fish Communities as Indicators of Environmental...Degradation. Amer. Fish. Soc. Sym. 8, pp. 123-144; 1990. Ouachita Highlands Ecoregional Assessment – Appendix A 24 Fausch, Kurt D., et al. Regional...Interior Plateau of USEPA (2002). SOURCES References: DeSelm and Murdock 1993, DeSelm and Webb 1997, Nelson 1985, USFWS 1974, Webb et al. 1997 Last 18. Introduction: seismology and earthquake engineering in Central and South America. Science.gov (United States) Espinosa, A.F. 1983-01-01 Reports the state-of-the-art in seismology and earthquake engineering that is being advanced in Central and South America. Provides basic information on seismological station locations in Latin America and some of the programmes in strong-motion seismology, as well as some of the organizations involved in these activities. -from Author 19. Seismology software: state of the practice Science.gov (United States) Smith, W. Spencer; Zeng, Zheng; Carette, Jacques 2018-05-01 We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages.
The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools. 1. Strategic decision analysis applied to borehole seismology International Nuclear Information System (INIS) Menke, M.M.; Paulsson, B.N.P.
1994-01-01 Strategic Decision Analysis (SDA) is the evolving body of knowledge on how to achieve high quality in the decisions that shape an organization's future. SDA comprises philosophy, process concepts, methodology, and tools for making good decisions. It specifically incorporates many concepts and tools from economic evaluation and risk analysis. Chevron Petroleum Technology Company (CPTC) has applied SDA to evaluate and prioritize a number of its most important and most uncertain R and D projects, including borehole seismology. Before SDA, there were significant issues and concerns about the value to CPTC of continuing to work on borehole seismology. The SDA process created a cross-functional team of experts to structure and evaluate this project. A credible economic model was developed, discrete risks and continuous uncertainties were assessed, and an extensive sensitivity analysis was performed. The results, even applied to a very restricted drilling program for a few years, were good enough to demonstrate the value of continuing the project. This paper explains the SDA philosophy, concepts, and process, and demonstrates the methodology and tools using the borehole seismology project example. SDA is useful in the upstream industry not just in R and D/technology decisions, but also in major exploration and production decisions. Since a major challenge for upstream companies today is to create and realize value, the SDA approach should have a very broad applicability. 2. Evaluation of surface-wave waveform modeling for lithosphere velocity structure Science.gov (United States) Chang, Tao-Ming Surface-waveform modeling methods will become standard tools for studying lithospheric structure because they can place greater constraints on earth structure and because of interest in the three-dimensional earth. The purpose of this study is to begin to learn the applicability and limitations of these methods.
A surface-waveform inversion method is implemented using generalized seismological data functional theory. The method has been tested using synthetic and real seismic data; the tests show that it is well suited for teleseismic and regional seismograms. Like other linear inversion problems, this method also requires a good starting model. To ease reliance on good starting models, a global search technique, the genetic algorithm, has been applied to surface waveform modeling. This method can rapidly find good models for explaining surface-wave waveforms at regional distances. However, this implementation also reveals that criteria which are widely used in seismological studies are not good enough to indicate the goodness of waveform fit. These two methods, together with the linear waveform inversion method and the traditional surface-wave dispersion inversion method, have been applied to a western Texas earthquake to test their abilities. The focal mechanism of the Texas event has been reestimated using a grid search for surface wave spectral amplitudes. A comparison of these four algorithms shows some interesting seismic evidence for lithosphere structure. 3. Stream channel responses and soil loss at off-highway vehicle stream crossings in the Ouachita National Forest Science.gov (United States) Daniel A. Marion; Jonathan D. Phillips; Chad Yocum; Stephanie H. Mehlhope 2014-01-01 This study investigates the geomorphic effects of ford-type stream crossings in an off-highway vehicle (OHV) trail complex in the Ouachita National Forest, Arkansas. At a total of 15 crossing sites, we used a disturbed vs. undisturbed study design to assess soil truncation and an upstream vs. downstream design to assess in-channel effects. The 15 sites ranged from OHV... 4. Small-mammal responses to pine regeneration treatments in the Ouachita Mountains of Arkansas and Oklahoma, USA Science.gov (United States) Roger W. Perry; Ronald E.
Thill 2005-01-01 We compared the initial effects of four forest regeneration treatments (single-tree selection, group selection, shelterwood, and clearcut), and unharvested controls (mature, second-growth forest) on relative abundance of small mammals and small-mammal habitat throughout the Ouachita Mountains of western Arkansas and eastern Oklahoma. We compared small-mammal capture... 5. The Insect Guild of White Oak Acorns: Its Effect on Mast Quality in the Ozark and Ouachita National Forests Science.gov (United States) Alex C. Mangini; Roger W. Perry 2004-01-01 Abstract - Hardwood regeneration, especially of oaks, is an essential component of ecosystem management in the Ouachita and Ozark Mountains of Arkansas. In addition, oak mast is an important wildlife food. Several species of insects inhabit and consume acorns. Data on the insect guild inhabiting white oak (Quercus alba L.) acorns... 6. Seasonal effects on ground water chemistry of the Ouachita Mountains. National Uranium Resource Evaluation Program International Nuclear Information System (INIS) Steele, K.F.; Fay, W.M.; Cavendor, P.N. 1982-08-01 Samples from 13 ground water sites (10 springs and 3 wells) in the Ouachita Mountains were collected nine times during a 16-month period. Daily sampling of six sites was carried out over an 11-day period, with rain during this period. Finally, hourly sampling was conducted at a single site over a 7-hour period. The samples were analyzed for pH, conductivity, temperature, total alkalinity, nitrate, ammonia, sulfate, phosphate, chloride, silica, Na, K, Li, Ca, Mg, Sr, Ba, Fe, Mn, Zn, Cu, Co, Ni, Pb, Hg, Br, F, V, Al, Dy, and U. Despite the dry season during late summer, and wet seasons during late spring and late fall in the Ouachita Mountain region, there was no significant change in the ground water chemistry with season. Likewise, there was no significant change due to rain storm events (daily sampling) or hourly sampling. 
The report is issued in draft form, without detailed technical and copy editing. This was done to make the report available to the public before the end of the National Uranium Resource Evaluation. 9 figures, 19 tables 7. Habitat associations of three crayfish endemic to the Ouachita Mountain Ecoregion Science.gov (United States) Dyer, Joseph J.; Brewer, Shannon K. 2018-01-01 Many crayfish are of conservation concern because of their use of unique habitats and often narrow ranges. In this study, we determined fine-scale habitat use by 3 crayfishes that are endemic to the Ouachita Mountains, in Oklahoma and Arkansas. We sampled Faxonius menae (Mena Crayfish), F. leptogonopodus (Little River Creek Crayfish), and Fallicambarus tenuis (Ouachita Mountain Crayfish) from wet and dry erosional channel units of 29 reaches within the Little River catchment. We compared channel-unit and microhabitat selection for each species. Crayfish of all species and life stages selected erosional channel units more often than depositional units, even though these sites were often dry. Accordingly, crayfish at all life stages typically selected the shallowest available microhabitats. Adult crayfish of all species and juvenile Little River Creek Crayfish selected patches of coarse substrate, and all crayfish tended to use the lowest amount of bedrock available. In general, we showed that these endemic crayfish used erosional channel units of streams, even when the channel units were dry. Conservation efforts that protect erosional channel units and mitigate actions that cause channel downcutting to bedrock would benefit these crayfish, particularly during harsh, summer drying periods. 8. 
Rotational Seismology Workshop of February 2006 Science.gov (United States) Evans, John R.; Cochard, A.; Graizer, Vladimir; Huang, Bor-Shouh; Hudnut, Kenneth W.; Hutt, Charles R.; Igel, H.; Lee, William H.K.; Liu, Chun-Chi; Majewski, Eugeniusz; Nigbor, Robert; Safak, Erdal; Savage, William U.; Schreiber, U.; Teisseyre, Roman; Trifunac, Mihailo; Wassermann, J.; Wu, Chien-Fu 2007-01-01 Introduction A successful workshop titled 'Measuring the Rotation Effects of Strong Ground Motion' was held simultaneously in Menlo Park and Pasadena via video conference on 16 February 2006. The purpose of the Workshop and this Report are to summarize existing data and theory and to explore future challenges for rotational seismology, including free-field strong motion, structural strong motion, and teleseismic motions. We also forged a consensus on the plan of work to be pursued by this international group in the near term. At this first workshop were 16 participants in Menlo Park, 13 in Pasadena, and a few on the telephone. It was organized by William H. K. Lee and John R. Evans and chaired by William U. Savage in Menlo Park and by Kenneth W. Hudnut in Pasadena. Its agenda is given in the Appendix. This workshop and efforts in Europe led to the creation of the International Working Group on Rotational Seismology (IWGoRS), an international volunteer group providing forums for exchange of ideas and data as well as hosting a series of Workshops and Special Sessions. IWGoRS created a Web site, backed by an FTP site, for distribution of materials related to rotational seismology. At present, the FTP site contains the 2006 Workshop agenda (also given in the Appendix below) and its PowerPoint presentations, as well as many papers (shared with permission of their authors), a comprehensive citations list, and related information.
Eventually, the Web site will become the sole authoritative source for IWGoRS and shared information: http://www.rotational-seismology.org ftp://ehzftp.wr.usgs.gov/jrevans/IWGoRS_FTPsite/ With contributions from various authors during and after the 2006 Workshop, this Report proceeds from the theoretical bases for making rotational measurements (Graizer, Safak, Trifunac) through the available observations (Huang, Lee, Liu, Nigbor), proposed suites of measurements (Hudnut), a discussion of broadband teleseismic rotational 9. EPOS Seismology services and their users Science.gov (United States) Haslinger, Florian; Dupont, Aurelien; Michelini, Alberto; Rietbrock, Andreas; Sleeman, Reinoud; Wiemer, Stefan; Basili, Roberto; Bossu, Rémy; Cakti, Eser; Cotton, Fabrice; Crawford, Wayne; Crowley, Helen; Danciu, Laurentiu; Diaz, Jordi; Garth, Tom; Locati, Mario; Luzi, Lucia; Pitilakis, Kyriazis; Roumelioti, Zafeiria; Strollo, Angelo 2017-04-01 The construction of seismological community services for the European Plate Observing System Research Infrastructure (EPOS) is by now well under way. A significant number of services are already operational, largely based on those existing at established institutions or collaborations like ORFEUS, EMSC, AHEAD and EFEHR, and more are being added to be ready for internal validation by late 2017. In this presentation we focus on a number of issues related to the interaction of the community of users with the services provided by the seismological part of the EPOS research infrastructure. How users interact with a service (and how satisfied they are with this interaction) is viewed as one important component of the validation of a service within EPOS, and certainly is key to the uptake of a service and from that also it's attributed value. 
Within EPOS Seismology, the following aspects of user interaction have already surfaced: - user identification (and potential tracking) versus ease-of-access and openness Requesting users to identify themselves when accessing a service provides various advantages to providers and users (e.g. quantifying & qualifying the service use, customization of services and interfaces, handling access rights and quotas), but may impact the ease of access and also deter users who don't wish to be identified for whatever reason. - service availability versus cost There is a clear and prominent connection between the availability of a service, both regarding uptime and capacity, and its operational cost (IT systems and personnel), and it is often not clear where to draw the line (and based on which considerations). Related to that, how best to utilize third-party IT infrastructures (either commercial or public), and what the long-term cost implications of that might be, is equally open. - licensing and attribution The issue of intellectual property and associated licensing policies for data, products and services is only recently gaining 10. EPOS-Seismology: building the Thematic Core Service for Seismology during the EPOS Implementation Phase Science.gov (United States) Haslinger, Florian; EPOS Seismology Consortium 2015-04-01 After the successful completion of the EPOS Preparatory Phase, the community of European Research Infrastructures in Seismology is now moving ahead with the build-up of the Thematic Core Service (TCS) for Seismology in EPOS, EPOS-Seismology. Seismology is a domain where European-level infrastructures have been developed over decades, often supported by large-scale EU projects. Today these infrastructures provide services to access earthquake waveforms (ORFEUS), parameters (EMSC) and hazard data and products (EFEHR).
The existing organizations constitute the backbone of infrastructures that will also continue to manage and host the services of the TCS EPOS-Seismology in future. While the governance and internal structure of these organizations will remain in place, and continue to provide direct interaction with the community, EPOS-Seismology will provide the integration of these within EPOS. The main challenge in the build-up of the TCS EPOS-Seismology is to improve and extend these existing services, producing a single framework which is technically, organizationally and financially integrated with the EPOS architecture, and to further engage various kinds of end users (e.g. scientists, engineers, public managers, citizen scientists). On the technical side the focus lies on four major tasks: - the construction of the next generation software architecture for the European Integrated (waveform) Data Archive EIDA, developing advanced metadata and station information services, fully integrating strong-motion waveforms and derived parametric engineering-domain data, and advancing the integration of mobile (temporary) networks and OBS deployments in EIDA; - the further development and expansion of services to access seismological products of scientific interest as provided by the community by implementing a common collection and development (IT) platform, improvements in the earthquake information services e.g. by introducing more robust quality indicators and diversifying 11. Risk and Geodynamically active areas of Carpathian lithosphere Directory of Open Access Journals (Sweden) Lubomil Pospíšil 2007-01-01 This paper illustrates an application of multidisciplinary data analysis to the Carpathian–Pannonian region and presents a verification of a Complex model of the Carpathian-Pannonian lithosphere by recent data sets and geophysical data analyses and its utilization for the determination of risk and active geodynamic and tectonic zones of 1st order.
This model can be used for analysing any Carpathian area from the point of view of seismic risk, hazards and geodynamic activity, which is important to know for the building of a repository for radioactive waste material. Besides the traditionally used geological (sedimentological and volcanological) and geomorphological (Remote Sensing) data, an emphasis was laid on geodetic, grav/mag data, seismic, seismological and other geophysical data (magnetotelluric, heat flow, paleomagnetic, etc.). All available geonomic (geologic, geodetic, geophysical, geomorphological) data were verified and unified on the basis of the same scale and, in the Western Carpathians, on the Remote Sensing data. The paper concentrates on two problematic areas – the so-called "rebounding area" in the Eastern Carpathians and the Raba-Muran-Malcov tectonic systems. 12. WFCatalog: A catalogue for seismological waveform data Science.gov (United States) Trani, Luca; Koymans, Mathijs; Atkinson, Malcolm; Sleeman, Reinoud; Filgueira, Rosa 2017-09-01 This paper reports advances in seismic waveform description and discovery leading to a new seismological service and presents the key steps in its design, implementation and adoption. This service, named WFCatalog, which stands for waveform catalogue, accommodates features of seismological waveform data. Therefore, it meets the need for seismologists to be able to select waveform data based on seismic waveform features as well as sensor geolocations and temporal specifications. We describe the collaborative design methods and the technical solution showing the central role of seismic feature catalogues in framing the technical and operational delivery of the new service. Also, we provide an overview of the complex environment wherein this endeavour is scoped and discuss the related challenges.
As multi-disciplinary, multi-organisational and global collaboration is necessary to address today's challenges, canonical representations can provide a focus for collaboration and conceptual tools for agreeing directions. Such collaborations can be fostered and formalised by rallying intellectual effort into the design of novel scientific catalogues and the services that support them. This work offers an example of the benefits generated by involving cross-disciplinary skills (e.g. data and domain expertise) from the early stages of design, and by sustaining the engagement with the target community throughout the delivery and deployment process. 13. Vital Signs: Seismology of Icy Ocean Worlds. Science.gov (United States) Vance, Steven D; Kedar, Sharon; Panning, Mark P; Stähler, Simon C; Bills, Bruce G; Lorenz, Ralph D; Huang, Hsin-Hua; Pike, W T; Castillo, Julie C; Lognonné, Philippe; Tsai, Victor C; Rhoden, Alyssa R 2018-01-01 Ice-covered ocean worlds possess diverse energy sources and associated mechanisms that are capable of driving significant seismic activity, but to date no measurements of their seismic activity have been obtained. Such investigations could reveal the transport properties and radial structures, with possibilities for locating and characterizing trapped liquids that may host life and yielding critical constraints on redox fluxes and thus on habitability. Modeling efforts have examined seismic sources from tectonic fracturing and impacts. Here, we describe other possible seismic sources, their associations with science questions constraining habitability, and the feasibility of implementing such investigations. We argue, by analogy with the Moon, that detectable seismic activity should occur frequently on tidally flexed ocean worlds. 
Their ices fracture more easily than rocks and dissipate more tidal energy. These worlds also should create less thermal noise due to their greater distance from the Sun and consequently smaller diurnal temperature variations. They also lack substantial atmospheres (except in the case of Titan) that would create additional noise. Thus, seismic experiments could be less complex and less susceptible to noise than prior or planned planetary seismology investigations of the Moon or Mars. Key Words: Seismology-Redox-Ocean worlds-Europa-Ice-Hydrothermal. Astrobiology 18, 37-53. 14. Lithospheric architecture of the South-Western Alps revealed by multiparameter teleseismic full-waveform inversion Science.gov (United States) Beller, S.; Monteiller, V.; Operto, S.; Nolet, G.; Paul, A.; Zhao, L. 2018-02-01 The Western Alps, although intensively investigated, remain elusive when it comes to determining their lithospheric structure. New inferences on the latter are important for the understanding of processes and mechanisms of orogeny needed to unravel the dynamic evolution of the Alps. This situation led to the deployment of the CIFALPS temporary experiment, conducted to address the lack of seismological data amenable to high-resolution seismic imaging of the crust and the upper mantle. We perform a 3-D isotropic full-waveform inversion (FWI) of nine teleseismic events recorded by the CIFALPS experiment to infer 3-D models of both density and P- and S-wave velocities of the Alpine lithosphere. Here, by FWI is meant the inversion of the full seismograms including phase and amplitude effects within a time window following the first arrival up to a frequency of 0.2 Hz. We show that the application of the FWI at the lithospheric scale is able to generate images of the lithosphere with unprecedented resolution and can furnish a reliable density model of the upper lithosphere.
In the shallowest part of the crust, we retrieve the shape of the fast/dense Ivrea body anomaly and detect the low velocities of the Po and SE France sedimentary basins. The geometry of the Ivrea body as revealed by our density model is consistent with the Bouguer anomaly. A sharp Moho transition is followed from the external part (30 km depth) to the internal part of the Alps (70-80 km depth), giving clear evidence of a continental subduction event during the formation of the Alpine Belt. A low-velocity zone in the lower lithosphere of the S-wave velocity model supports the hypothesis of a slab detachment in the western part of the Alps that is followed by asthenospheric upwelling. The application of FWI to teleseismic data helps to fill the gap of resolution between traditional imaging techniques, and enables integrated interpretations of both upper and lower lithospheric structures. 15. Rifting Thick Lithosphere - Canning Basin, Western Australia Science.gov (United States) Czarnota, Karol; White, Nicky 2016-04-01 The subsidence histories and architecture of most, but not all, rift basins are elegantly explained by extension of ~120 km thick lithosphere followed by thermal re-thickening of the lithospheric mantle to its pre-rift thickness. Although this well-established model underpins most basin analysis, it is unclear whether the model explains the subsidence of rift basins developed over substantially thick lithosphere (as imaged by seismic tomography beneath substantial portions of the continents). The Canning Basin of Western Australia is an example where a rift basin putatively overlies lithosphere ≥180 km thick, imaged using shear wave tomography. 
Subsidence modelling in this study shows that, to account for the observed subsidence at standard crustal densities, the lithospheric mantle is required to be depleted in density by 50-70 kg m⁻³, which is in line with estimates derived from modelling rare-earth element concentrations of the ~20 Ma lamproites and global isostatic considerations. Together, these results suggest that thick lithosphere thinned to >120 km is thermally stable and is not accompanied by post-rift thermal subsidence driven by thermal re-thickening of the lithospheric mantle. Our results show that variations in lithospheric thickness place a fundamental control on basin architecture. The discrepancy between estimates of lithospheric thickness derived from subsidence data for the western Canning Basin and those derived from shear wave tomography suggests that the latter technique is currently limited in its ability to resolve lithospheric thickness variations at horizontal half-wavelength scales of <300 km. 16. Understanding and Observing Subglacial Friction Using Seismology Science.gov (United States) Tsai, V. C. 2017-12-01 Glaciology began with a focus on understanding basic mechanical processes and producing physical models that could explain the principal observations. Recently, however, more attention has been paid to the wealth of recent observations, with many modeling efforts relying on data assimilation and empirical scalings rather than on first-principles physics. Notably, ice sheet models commonly assume that subglacial friction is characterized by a "slipperiness" coefficient that is determined by inverting surface velocity observations. Predictions are usually then made by assuming these slipperiness coefficients are spatially and temporally fixed.
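The density-depletion estimate in the Canning Basin entry (no. 15) above follows from a simple isostatic balance. A minimal sketch, assuming illustrative values for the asthenosphere density and lithospheric-mantle thickness (neither is given in the abstract):

```python
# Illustrative isostatic estimate: how much a chemically depleted (lighter)
# lithospheric mantle buoys up a column, relative to fertile mantle.
# All parameter values are assumptions for illustration, not from the abstract.

RHO_ASTH = 3200.0      # asthenosphere density, kg/m^3 (assumed)
H_MANTLE_LITH = 140e3  # lithospheric mantle thickness, m (assumed: 180 km plate - 40 km crust)

def isostatic_uplift(delta_rho, h=H_MANTLE_LITH, rho_a=RHO_ASTH):
    """Elevation gain (m) from reducing lithospheric-mantle density by delta_rho (kg/m^3)."""
    return delta_rho * h / rho_a

for d in (50.0, 70.0):  # kg/m^3 depletion range quoted in the abstract
    print(f"depletion {d:.0f} kg/m^3 -> ~{isostatic_uplift(d):.0f} m of isostatic support")
```

A kilometre-scale buoyancy effect of this kind is why the quoted 50-70 kg m⁻³ depletion materially changes the predicted subsidence history.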
However, this is only valid if slipperiness is an unchanging material property of the bed and, despite decades of work on subglacial friction, it has remained unclear how to best account for such subglacial physics in ice sheet models. Here, we describe how basic seismological concepts and observations can be used to improve our understanding and determination of subglacial friction. First, we discuss how standard models of granular friction can and should be used in basal friction laws for marine ice sheets, where very low effective pressures exist. We show that under realistic West Antarctic Ice Sheet conditions, standard Coulomb friction should apply in a relatively narrow zone near the grounding line and that this should transition abruptly as one moves inland to a different, perhaps Weertman-style, dependence of subglacial stress on velocity. We show that this subglacial friction law predicts significantly different ice sheet behavior even as compared with other friction laws that include effective pressure. Secondly, we explain how seismological observations of water flow noise and basal icequakes constrain subglacial physics in important ways. Seismically observed water flow noise can provide constraints on water pressures and channel sizes and geometry, leading to important data on subglacial friction 17. DEFORMATION WAVES AS A TRIGGER MECHANISM OF SEISMIC ACTIVITY IN SEISMIC ZONES OF THE CONTINENTAL LITHOSPHERE Directory of Open Access Journals (Sweden) S. I. Sherman 2013-01-01 Full Text Available Deformation waves as a trigger mechanism of seismic activity and migration of earthquake foci have been under discussion by researchers in seismology and geodynamics for over 50 years. Four sections of this article present available principal data on impacts of wave processes on seismicity and new data. 
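The subglacial-friction entry (no. 16) above describes a basal stress that is Coulomb-limited near the grounding line (where effective pressure is very low) and Weertman-style inland. A minimal sketch of such a composite law, with illustrative coefficients that are assumptions rather than values from the abstract:

```python
# Composite basal friction law: basal shear stress is the lesser of a
# Coulomb (effective-pressure) bound and a Weertman-style power-law drag.
# Coefficients f, C, m below are illustrative assumptions.

def basal_stress(u, N, f=0.5, C=7.6e5, m=3.0):
    """Basal shear stress (Pa) as min(Coulomb, Weertman).

    u : sliding speed (m/s); N : effective pressure (Pa);
    f : friction coefficient; C : Weertman prefactor; m : exponent.
    """
    coulomb = f * N
    weertman = C * u ** (1.0 / m)
    return min(coulomb, weertman)

u = 300.0 / 3.15e7              # 300 m/yr converted to m/s
print(basal_stress(u, N=1e4))   # low N (near grounding line): Coulomb branch -> 5000.0 Pa
print(basal_stress(u, N=1e7))   # high N (inland): Weertman branch (~1.6e4 Pa)
```

The abrupt switch between branches as N grows is the narrow Coulomb-to-Weertman transition the abstract describes.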
The first section reviews analytical and experimental studies aimed at identification of relationships between wave processes in the lithosphere and seismic activity manifested as space-and-time migration of individual earthquake foci or clusters of earthquakes. It is concluded that with a systematic approach, instead of using a variety of terms to denote waves that trigger seismic process in the lithosphere, it is reasonable to apply the concise definition of 'deformation waves', which is most often used in fact. The second section contains a description of deformation waves considered as the trigger mechanism of seismic activity. It is concluded that a variety of methods are applied to identify deformation waves, and such methods are based on various research methods and concepts that naturally differ in sensitivity concerning detection of waves and/or impact of the waves on seismic process. Epicenters of strong earthquakes are grouped into specific linear or arc-shaped systems, whose common criterion is the same time interval of the occurrence of events under analysis. On site the systems compose zones with similar time sequences, which correspond to the physical notion of moving waves (Fig. 9). Periods of manifestation of such waves are estimated as millions of years, and a direct consideration of the presence of waves and wave parameters is highly challenging. In the current state-of-the-art, geodynamics and seismology cannot provide any other solution yet. The third section presents a solution considering record of deformation waves in the lithosphere. Taking into account that all the earthquakes with M≥3.0 are associated with 18. The lithosphere-asthenosphere: Italy and surroundings International Nuclear Information System (INIS) Panza, G.F.; Aoudia, A.; Pontevivo, A.; Chimera, G.; Raykova, R.
2003-02-01 The velocity-depth distribution of the lithosphere-asthenosphere in the Italian region and surroundings is imaged, with a lateral resolution of about 100 km, by surface wave velocity tomography and non-linear inversion. Maps of the Moho depth, of the thickness of the lithosphere and of the shear-wave velocities, down to depths of 200 km and more, are constructed. A mantle wedge, identified in the uppermost mantle along the Apennines and the Calabrian Arc, underlies the principal recent volcanoes, and partial melting can be relevant in this part of the uppermost mantle. In Calabria a lithospheric doubling is seen, in connection with the subduction of the Ionian lithosphere. The asthenosphere is shallow in the Southern Tyrrhenian Sea. High velocity bodies, cutting the asthenosphere, outline the Adria-Ionian subduction in the Tyrrhenian Sea and the deep-reaching lithospheric root in the Western Alps. Less deep lithospheric roots are seen in the Central Apennines. The lithosphere-asthenosphere properties delineate a differentiation between the northern and the southern sectors of the Adriatic Sea, likely attesting the fragmentation of Adria. (author) 19. Craton Heterogeneity in the South American Lithosphere Science.gov (United States) Lloyd, S.; Van der Lee, S.; Assumpcao, M.; Feng, M.; Franca, G. S. 2012-04-01 We investigate structure of the lithosphere beneath South America using receiver functions, surface wave dispersion analysis, and seismic tomography. The data used include recordings from 20 temporary broadband seismic stations deployed across eastern Brazil (BLSP02) and from the Chile Ridge Subduction Project seismic array in southern Chile (CRSP). By jointly inverting Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh wave forms we obtain a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust.
Whether or not a correlation between crustal thickness and geologic age can be derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. We also invert for S velocity structure and estimate the depth of the lithosphere-asthenosphere boundary (LAB) in Precambrian South America. The new model reveals a relatively thin lithosphere throughout most of Precambrian South America (<140 km). Comparing LAB depth with lithospheric age shows they are overall positively correlated, whereby the thickest lithosphere occurs in the relatively small São Francisco craton (200 km). However, within the larger Amazonian craton the younger lithosphere is thicker, indicating that locally even larger cratons are not protected from erosion or reworking of the lithosphere. 20. The lithosphere-asthenosphere: Italy and surroundings CERN Document Server Panza, G F; Chimera, G; Pontevivo, A; Raykova, R 2003-01-01 The velocity-depth distribution of the lithosphere-asthenosphere in the Italian region and surroundings is imaged, with a lateral resolution of about 100 km, by surface wave velocity tomography and non-linear inversion. Maps of the Moho depth, of the thickness of the lithosphere and of the shear-wave velocities, down to depths of 200 km and more, are constructed. A mantle wedge, identified in the uppermost mantle along the Apennines and the Calabrian Arc, underlies the principal recent volcanoes, and partial melting can be relevant in this part of the uppermost mantle. In Calabria a lithospheric doubling is seen, in connection with the subduction of the Ionian lithosphere. The asthenosphere is shallow in the Southern Tyrrhenian Sea.
High velocity bodies, cutting the asthenosphere, outline the Adria-Ionian subduction in the Tyrrhenian Sea and the deep-reaching lithospheric root in the Western Alps. Less deep lithospheric roots are seen in the Central Apennines. The lithosphere-asthenosphere properties delineat... 1. Numerical simulations of the mantle lithosphere delamination Science.gov (United States) Morency, C.; Doin, M.-P. 2004-03-01 Sudden uplift, extension, and increased igneous activity are often explained by rapid mechanical thinning of the lithospheric mantle. Two main thinning mechanisms have been proposed: convective removal of a thickened lithospheric root, and delamination of the mantle lithosphere along the Moho. In the latter case, the whole mantle lithosphere peels away from the crust by the propagation of a localized shear zone and sinks into the mantle. To study this mechanism, we perform two-dimensional (2-D) numerical simulations of convection using a viscoplastic rheology with an effective viscosity depending strongly on temperature, depth, composition (crust/mantle), and stress. The simulations develop in four steps. (1) We first obtain "classical" sublithospheric convection for a long time period (˜300 Myr), yielding a slightly heterogeneous lithospheric temperature structure. (2) At some time, in some simulations, a strong thinning of the mantle occurs progressively in a small area (˜100 km wide). This process puts the asthenosphere in direct contact with the lower crust. (3) Large pieces of mantle lithosphere then quickly sink into the mantle by the horizontal propagation of a detachment level away from the "asthenospheric conduit" or by progressive erosion on the flanks of the delaminated area. (4) Delamination pauses or stops when the lithospheric mantle part detaches or when small-scale convection on the flanks of the delaminated area is counterbalanced by heat diffusion.
We determine the parameters (crustal thicknesses, activation energies, and friction coefficients) leading to delamination initiation (step 2). We find that delamination initiates where the Moho temperature is the highest, as soon as the crust and mantle viscosities are sufficiently low. Delamination should occur on Earth when the Moho temperature exceeds ˜800°C. This condition can be reached by thermal relaxation in a thickened crust in orogenic setting or by corner flow lithospheric erosion in the 2. Lithospheric Structure, Crustal Kinematics, and Earthquakes in North China: An Integrated Study Science.gov (United States) Liu, M.; Yang, Y.; Sandvol, E.; Chen, Y.; Wang, L.; Zhou, S.; Shen, Z.; Wang, Q. 2007-12-01 The North China block (NCB) is geologically part of the Archaean Sino-Korean craton. But unusual for a craton, it was thermally rejuvenated since late Mesozoic, and experienced widespread extension and volcanism through much of the Cenozoic. Today, the NCB is characterized by strong internal deformation and seismicity, including the 1976 Tangshan earthquake that killed ~250,000 people. We have started a multidisciplinary study to image the lithospheric and upper mantle structure using seismological methods, to delineate crustal kinematics and deformation via studies of neotectonics and space geodesy, and to investigate the driving forces, the stress states and evolution, and seismicity using geodynamic modeling. Both seismic imaging and GPS results indicate that the Ordos plateau, which is the western part of the NCB and a relic of the Sino-Korean craton, has been encroached around its southern margins by mantle flow and thus is experiencing active cratonic destruction. Some of the mantle flow may be driven by the Indo-Asian collision, although the cause of the broad mantle upwelling responsible for the Mesozoic thinning of the NCB lithosphere remains uncertain. 
At present, crustal deformation in the NCB is largely driven by gravitational spreading of the expanding Tibetan Plateau. Internal deformation within the NCB is further facilitated by the particular tectonic boundary conditions around the NCB, and the large lateral contrasts of lithospheric strength and rheology. Based on the crustal kinematics and lithospheric structure, we have developed a preliminary geodynamic model for stress states and strain energy in the crust of the NCB. The predicted long-term strain energy distribution is comparable with the spatial pattern of seismic energy release in the past 2000 years. We are exploring the cause of the spatiotemporal occurrence of large earthquakes in the NCB, especially the apparent migration of seismicity from the Weihe-Shanxi grabens around the Ordos to 3. Lithospheric low-velocity zones associated with a magmatic segment of the Tanzanian Rift, East Africa Science.gov (United States) Plasman, M.; Tiberi, C.; Ebinger, C.; Gautier, S.; Albaric, J.; Peyrat, S.; Déverchère, J.; Le Gall, B.; Tarits, P.; Roecker, S.; Wambura, F.; Muzuka, A.; Mulibo, G.; Mtelela, K.; Msabi, M.; Kianji, G.; Hautot, S.; Perrot, J.; Gama, R. 2017-07-01 Rifting in a cratonic lithosphere is strongly controlled by several interacting processes including crust/mantle rheology, magmatism, inherited structure and stress regime. In order to better understand how these physical parameters interact, a 2 yr long seismological experiment has been carried out in the North Tanzanian Divergence (NTD), at the southern tip of the eastern magmatic branch of the East African rift, where the southward-propagating continental rift is at its earliest stage. We analyse teleseismic data from 38 broad-band stations ca. 25 km spaced and present here results from their receiver function (RF) analysis. The crustal thickness and Vp/Vs ratio are retrieved over a ca. 
200 × 200 km² area encompassing the South Kenya magmatic rift, the NTD and the Ngorongoro-Kilimanjaro transverse volcanic chain. The cratonic nature of the lithosphere is clearly evinced through thick (up to ca. 40 km) homogeneous crust beneath the rift shoulders. Where rifting is present, the Moho rises up to 27 km depth and the crust is strongly layered with clear velocity contrasts in the RF signal. The Vp/Vs ratio reaches its highest values (ca. 1.9) beneath the volcanic edifices and areas of thinner crust, advocating for melting within the crust. We also clearly identify two major low-velocity zones (LVZs) within the NTD, one in the lower crust and the second in the upper part of the mantle. The first one starts at 15-18 km depth and correlates well with recent tomographic models. This LVZ does not always coexist with a high Vp/Vs ratio, pleading for a supplementary source of velocity decrease, such as temperature or composition. At a greater depth of ca. 60 km, a mid-lithospheric discontinuity roughly mimics the step-like and symmetrically outward-dipping geometry of the Moho but with a more slanting direction (NE-SW) compared to the N-S rift. By comparison with synthetic RF, we estimate the associated velocity reduction to be 8-9 per cent. We relate this interface to melt ponding 4. Impact of lithospheric rheology on surface topography Science.gov (United States) Liao, K.; Becker, T. W. 2017-12-01 The expression of mantle flow, such as that due to a buoyant plume, as surface topography is a classical problem, yet the role of rheological complexities could benefit from further exploration. Here, we investigate the topographic expressions of mantle flow by means of numerical and analytical approaches. In numerical modeling, both conventional free-slip and more realistic stress-free boundary conditions are applied.
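The North Tanzanian Divergence entry (no. 3) above reports Vp/Vs ratios up to ca. 1.9 as evidence for crustal melt. The standard conversion from Vp/Vs to Poisson's ratio for an isotropic elastic medium, ν = (κ² − 2) / (2(κ² − 1)) with κ = Vp/Vs, shows why such values stand out:

```python
# Convert a Vp/Vs ratio (kappa) to Poisson's ratio for an isotropic solid.
# The ~1.73 reference value is an assumption for "ordinary" crust.

def poisson_ratio(vp_vs):
    """Poisson's ratio nu from the Vp/Vs ratio (isotropic elasticity)."""
    k2 = vp_vs ** 2
    return (k2 - 2.0) / (2.0 * (k2 - 1.0))

for k in (1.73, 1.9):  # typical crust vs the high value reported in the entry
    print(f"Vp/Vs = {k:.2f} -> Poisson's ratio = {poisson_ratio(k):.3f}")
```

Values near 0.31 (for κ = 1.9), well above the ~0.25 of ordinary crystalline crust, are commonly read as a sign of partial melt or fluids.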
For purely viscous rheology, a high viscosity lithosphere will lead to slight overestimates of topography for certain settings, which can be understood by effectively modified boundary conditions. Under stress-free conditions, numerical and analytical results show that the magnitude of dynamic topography decreases with increasing lithosphere thickness (L) and viscosity (η_L), as L⁻¹ and η_L⁻³. The wavelength of dynamic topography increases linearly with L and (η_L/η_M)^(1/3). We also explore the time-dependent interactions of a rising plume with the lithosphere. For a layered lithosphere with a decoupling weak lower crust embedded between stronger upper crust and lithospheric mantle, dynamic topography increases with a thinner and weaker lower crust. The dynamic topography saturates when the decoupling viscosity is 3-4 orders of magnitude lower than the viscosity of the upper crust and lithospheric mantle. We further explore the role of visco-elastic and visco-elasto-plastic rheologies. 5. A Collaborative Cyberinfrastructure for Earthquake Seismology Science.gov (United States) Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R. 2013-12-01 One of the challenges in real time seismology is predicting an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth, or magnitude estimates, and/or misevaluation of propagation characteristics, site effects and building vulnerability can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure benefits from the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second most visited earthquake information website globally, within tens of seconds of the occurrence of a felt event.
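The stress-free scaling relations quoted in the surface-topography entry (no. 4) above (amplitude ∝ L⁻¹ and η_L⁻³; wavelength ∝ L and (η_L/η_M)^(1/3)) can be explored with a small sketch. The reference amplitude and wavelength below are arbitrary assumptions, since the abstract gives only the proportionalities:

```python
# Evaluate the quoted power-law scalings of dynamic topography relative to an
# assumed reference state. REF values are illustrative assumptions, not data.

REF = dict(L=100e3, eta_L=1e23, eta_M=1e21, amp=1000.0, wav=500e3)

def dyn_topo_amplitude(L, eta_L):
    """Amplitude scaling: ~ L^-1 and eta_L^-3 (stress-free surface)."""
    return REF["amp"] * (REF["L"] / L) * (REF["eta_L"] / eta_L) ** 3

def dyn_topo_wavelength(L, eta_L, eta_M):
    """Wavelength scaling: ~ L and (eta_L/eta_M)^(1/3)."""
    contrast = (eta_L / eta_M) / (REF["eta_L"] / REF["eta_M"])
    return REF["wav"] * (L / REF["L"]) * contrast ** (1.0 / 3.0)

# Doubling lithosphere thickness halves the amplitude and doubles the wavelength:
print(dyn_topo_amplitude(200e3, 1e23))        # -> 500.0
print(dyn_topo_wavelength(200e3, 1e23, 1e21))  # -> 1000000.0 (m)
```

The very strong η_L⁻³ dependence is the notable feature: a modest increase in lithospheric viscosity suppresses dynamic topography drastically.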
It includes classical crowdsourcing tools such as online questionnaires available in 39 languages, and tools to collect geolocated pictures. It also comprises information derived from the real time analysis of the traffic on the EMSC website, a method named flashsourcing. In case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just been through. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and in some cases map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server in collaboration with Stanford University and the USGS, to collect ground motion records from volunteers, and are also involved in a project to detect earthquakes from ground-motion sensors in smartphones. Strategies have been developed for several social media (Facebook, Twitter...) not only to distribute earthquake information, but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this 6. ASDF - A Modern Data Format for Seismology Science.gov (United States) Krischer, Lion; Smith, James; Lei, Wenjie; Lefebvre, Matthieu; Ruan, Youyi; Sales de Andrade, Elliot; Podhorszki, Norbert; Bozdag, Ebru; Tromp, Jeroen 2017-04-01 Seismology as a science is driven by observing and understanding data, and it is thus vital to make this as easy and accessible as possible. The growing volume of freely available data coupled with ever expanding computational power enables scientists to take on new and bigger problems. This evolution is in some part hindered as existing data formats have not been designed with it in mind.
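Flashsourcing, as described in the EMSC entry (no. 5) above, detects felt earthquakes from a sudden surge of visitors to the website. A toy detector in the same spirit, with window length, threshold factor, and minimum hit count chosen purely for illustration:

```python
# Toy traffic-surge detector: flag minutes whose hit count jumps well above
# the trailing mean. All thresholds are illustrative assumptions, not the
# actual flashsourcing algorithm.
from statistics import mean

def detect_surge(hits_per_minute, window=10, factor=5.0, min_hits=50):
    """Return indices of minutes whose traffic exceeds factor x trailing mean."""
    alerts = []
    for i in range(window, len(hits_per_minute)):
        baseline = mean(hits_per_minute[i - window:i])
        h = hits_per_minute[i]
        if h >= min_hits and h > factor * max(baseline, 1.0):
            alerts.append(i)
    return alerts

# Quiet background traffic, then a felt event drives visitors to the site:
traffic = [8, 10, 9, 11, 10, 9, 12, 10, 11, 9, 10, 240, 400, 180]
print(detect_surge(traffic))  # -> [11, 12]
```

The same trailing-baseline idea underlies the geographic variant: grouping surging (or vanishing) visitors by IP-derived location maps the felt, or damaged, area.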
We present ASDF (http://seismic-data.org), the Adaptable Seismic Data Format, a novel, modern, and especially practical data format for all branches of seismology, with particular focus on how it is incorporated into seismic full waveform inversion workflows. The format aims to solve five key issues: Efficiency: Fast I/O operations, particularly in high-performance computing environments, in part by limiting the total number of files. Data organization: Different types of data are needed for a variety of tasks. This results in ad hoc data organization and formats that are hard to maintain, integrate, reproduce, and exchange. Data exchange: We want to exchange complex and complete data sets. Reproducibility: Often lacking but crucial to advance our science. Mining, visualization, and understanding of data: As data volumes grow, new and more capable techniques to query and visualize large data sets are needed. ASDF tackles these by defining a structure on top of HDF5, reusing as many existing standards (QuakeML, StationXML, PROV) as possible. An essential trait of ASDF is that it empowers the construction of completely self-describing data sets including waveform, station, and event data together with non-waveform data and a provenance description of everything. This, for example, enables for the first time the proper archival and exchange of processed or synthetic waveforms. To aid community adoption we developed mature tools in Python as well as in C and Fortran. Additionally we provide a formal definition of the format, a validation tool, and integration into widely used
SGRAPH is considered a new system for maintaining and analyzing seismic waveform data in a stand-alone Windows-based application that manipulates a wide range of data formats. SGRAPH was described in detail in the first part of this paper. In this part, I discuss the advanced techniques included in the program and its applications in seismology. Because of the numerous tools included in the program, SGRAPH alone is sufficient to perform basic waveform analysis and to solve advanced seismological problems. In the first part of this paper, the application of source parameter estimation and hypocentral location was given. Here, I discuss SGRAPH waveform modeling tools. This paper exhibits examples of how to apply the SGRAPH tools to perform waveform modeling for estimating the focal mechanism and crustal structure of local earthquakes. 8. Permeability Barrier Generation in the Martian Lithosphere Science.gov (United States) Schools, Joe; Montési, Laurent 2015-11-01 Permeability barriers develop when a magma produced in the interior of a planet rises into the cooler lithosphere and crystallizes more rapidly than the lithosphere can deform (Sparks and Parmentier, 1991). Crystallization products may then clog the porous network in which melt is propagating, reducing the permeability to almost zero, i.e., forming a permeability barrier. Subsequent melts cannot cross the barrier. Permeability barriers have been useful to explain variations in crustal thickness at mid-ocean ridges on Earth (Magde et al., 1997; Hebert and Montési, 2011; Montési et al., 2011).
We explore here under what conditions permeability barriers may form on Mars. We use the MELTS thermodynamic calculator (Ghiorso and Sack, 1995; Ghiorso et al., 2002; Asimow et al., 2004) in conjunction with estimated Martian mantle compositions (Morgan and Anders, 1979; Wänke and Dreibus, 1994; Lodders and Fegley, 1997; Sanloup et al., 1999; Taylor 2013) to model the formation of permeability barriers in the lithosphere of Mars. In order to represent potential past and present conditions of Mars, we vary the lithospheric thickness, mantle potential temperature (heat flux), oxygen fugacity, and water content. Our results show that permeability layers can develop in the thermal boundary layer of the simulated Martian lithosphere if the mantle potential temperature is higher than ~1500°C. The various Martian mantle compositions yield barriers in the same locations, under matching variable conditions. There is no significant difference in barrier location over the range of accepted Martian oxygen fugacity values. Water content is the most significant influence on barrier development, as it reduces the temperature of crystallization, allowing melt to rise further into the lithosphere. Our lower temperature and thicker lithosphere model runs, which are likely the most similar to modern Mars, show no permeability barrier generation. Losing the possibility of having a permeability 9. Geophysical Exploration Technologies for the Deep Lithosphere Research: An Education Materials for High School Students Science.gov (United States) Xu, H.; Xu, C.; Luo, S.; Chen, H.; Qin, R. 2012-12-01 The science of Geophysics applies the principles of physics to the study of the Earth.
Geophysical exploration technologies include earthquake seismology, the seismic reflection and refraction methods, the gravity method, the magnetic method and the magnetotelluric method, which are used to measure the interior material distribution, its structure and the tectonics in the lithosphere of the Earth. Part of the research project in SinoProbe-02-06 is to develop suitable education materials as cartoon movies targeting high school students and the public. The cartoon movies include five parts. The first part covers the structures of the Earth's interior and the variation in their physical properties, including density, P-wave and S-wave velocities and so on, which are the fundamentals of the geophysical exploration technologies. The second part covers seismology, which uses the propagation of elastic waves through the Earth to study the structure and the material distribution of the Earth's interior. It can be divided into earthquake seismology and artificial (controlled-source) seismics, commonly using reflection and refraction. The third part covers the magnetic method. Earth's magnetic field (also known as the geomagnetic field) extends from the Earth's inner core to where it meets the solar wind, a stream of energetic particles emanating from the Sun. The aim of a magnetic survey is to investigate subsurface geology on the basis of anomalies in the Earth's magnetic field resulting from the magnetic properties of the underlying rocks. The magnetic method in the lithosphere attempts to use magnetic disturbance to analyse the regional geological structure and the magnetic boundaries of the crust. The fourth part covers the gravity method. A gravity anomaly results from the inhomogeneous distribution of density in the Earth. Usually gravity anomalies contain superposed anomalies from several sources. The long-wavelength anomalies due to deep density contrasts are called regional anomalies. They are 10.
Global thermal models of the lithosphere Science.gov (United States) Cammarano, F.; Guerri, M. 2017-12-01 Unraveling the thermal structure of the outermost shell of our planet is key for understanding its evolution. We obtain temperatures from interpretation of global shear-velocity (VS) models. Long-wavelength thermal structure is well determined by seismic models and only slightly affected by compositional effects and uncertainties in mineral-physics properties. Absolute temperatures and gradients with depth, however, are not well constrained. Adding constraints from petrology, heat-flow observations and the thermal evolution of oceanic lithosphere helps to better estimate absolute temperatures in the top part of the lithosphere. We produce global thermal models of the lithosphere at different spatial resolutions, up to spherical-harmonics degree 24, and provide estimated standard deviations. We provide a purely seismic thermal (TS) model and hybrid models where temperatures are corrected with steady-state conductive geotherms on continents and cooling model temperatures on oceanic regions. All relevant physical properties, with the exception of thermal conductivity, are based on a self-consistent thermodynamical modelling approach. Our global thermal models also include density and compressional-wave velocities (VP) as obtained either assuming no lateral variations in composition or a simple reference 3-D compositional structure, which takes into account a chemically depleted continental lithosphere. We find that seismically-derived temperatures in continental lithosphere fit well, overall, with continental geotherms, but a large variation in radiogenic heat is required to reconcile them with heat flow (long wavelength) observations. Oceanic shallow lithosphere below mid-oceanic ridges and young oceans is colder than expected, confirming the possible presence of a dehydration boundary around 80 km depth already suggested in previous studies.
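The hybrid models in the global-thermal-models entry above correct oceanic temperatures with a cooling model. The classic half-space cooling geotherm, T(z, t) = Ts + (Tm − Ts)·erf(z / (2√(κt))), can be evaluated directly; the parameter values below are standard textbook assumptions, not values from the abstract:

```python
# Half-space cooling model for oceanic lithosphere temperature.
# Ts, Tm, kappa are standard textbook assumptions for illustration.
import math

def halfspace_T(z_m, age_myr, Ts=0.0, Tm=1350.0, kappa=1e-6):
    """Temperature (deg C) at depth z_m (m) under seafloor of the given age (Myr).

    Ts : surface temperature, Tm : mantle temperature, kappa : thermal
    diffusivity (m^2/s).
    """
    t = age_myr * 3.15e13  # Myr -> seconds
    return Ts + (Tm - Ts) * math.erf(z_m / (2.0 * math.sqrt(kappa * t)))

# Temperature at 50 km depth beneath 80-Myr-old ocean floor:
print(round(halfspace_T(50e3, 80.0), 1))
```

Isotherms of this model deepen as the square root of age, which is the behavior the seismically derived oceanic temperatures in the entry are compared against.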
The global thermal models should serve as the basis to move at a smaller spatial scale, where additional thermo-chemical variations 11. Facilitate, Collaborate, Educate: the Role of the IRIS Consortium in Supporting National and International Research in Seismology (Invited) Science.gov (United States) Simpson, D. W.; Beck, S. L. 2009-12-01 Over the twenty-five years since its founding in 1984, the IRIS Consortium has contributed in fundamental ways to change the practice and culture of research in seismology in the US and worldwide. From an original founding group of twenty-two U.S. academic institutions, IRIS membership has now grown to 114 U.S. Member Institutions, 20 Educational Affiliates and 103 Foreign Affiliates. With strong support from the National Science Foundation, additional resources provided by other federal agencies, close collaboration with the U.S. Geological Survey and many international partners, the technical resources of the core IRIS programs - the Global Seismographic Network (GSN), the Program for Array Seismic Studies of the Continental Lithosphere (PASSCAL), the Data Management System (DMS) and Education and Outreach - have grown to become a major national and international source of experimental data for research on earthquakes and Earth structure, and a resource to support education and outreach to the public. While the primary operational focus of the Consortium is to develop and maintain facilities for the collection of seismological data for basic research, IRIS has become much more than an instrument facility. It has become a stimulus for collaboration between academic seismological programs and a focus for their interactions with national and international partners. It has helped establish the academic community as a significant contributor to the collection of data and an active participant in global research and monitoring. 
As a consortium of virtually all of the Earth science research institutions in the US, IRIS has helped coordinate the academic community in the development of new initiatives, such as EarthScope, to strengthen the support for science and argue for the relevance of seismology and its use in hazard mitigation. The early IRIS pioneers had the foresight to carefully define program goals and technical standards for the IRIS facilities that have stood 12. Bringing Seismological Research into the School Setting Science.gov (United States) Pavlis, G. L.; Hamburger, M. W. 2004-12-01 One of the primary goals of educational seismology programs is to bring inquiry-based research to the middle- and high-school classroom setting. Although it is often stated as a long-term goal of science outreach programs, in practice there are many barriers to research in the school setting, among them increasing emphasis on test-oriented training, decreasing interest and participation in science fairs, limited teacher confidence and experience for mentoring research, insufficient student preparedness for research projects, and the short term of university involvement (typically limited to brief one-day encounters). For the past three+ years we have tried to address these issues through a focused outreach program we have called the PEPP Research Fellows Program. This is treated as an honors program in which high school teachers in our group nominate students with interests in science careers. These students are invited to participate in the program, and those who elect to take part participate in a one-day education and training session in the fall. Rather than leave research projects completely open, we direct the students toward one of two specific, group-oriented projects (in our case, one focusing on local recordings of mining explosions, and a second on teleseismic body-wave analysis), but we encourage them to act as independent researchers and follow topics of interest.
The students then work on seismic data from the local educational network or from the IRIS facilities. Following several months of informal interaction with teachers and students (email, web conferencing, etc.), we bring the students and teachers to our university for a weekend research symposium in the spring. Students present their work in oral or poster form and prizes are given for the best papers. Projects range from highly local projects (records of seismic noise at school X) to larger-scale regional projects (analysis of teleseismic P-wave delays at PEPP network stations). From 20 to 13. Tsunami Ionospheric warning and Ionospheric seismology Science.gov (United States) Lognonne, Philippe; Rolland, Lucie; Rakoto, Virgile; Coisson, Pierdavide; Occhipinti, Giovanni; Larmat, Carene; Walwer, Damien; Astafyeva, Elvira; Hebert, Helene; Okal, Emile; Makela, Jonathan 2014-05-01 The last decade demonstrated that seismic waves and tsunamis are coupled to the ionosphere. Observations of Total Electron Content (TEC) and airglow perturbations of unique quality and amplitude were made during the giant 2011 Tohoku (Japan) earthquake, and observations of much lower tsunamis, down to a few cm of sea uplift, are now routinely done, including for the Kuril 2006, Samoa 2009, Chile 2010, and Haida Gwaii 2012 tsunamis. This new branch of seismology is now mature enough to tackle the new challenge associated with the inversion of these data, with the goal of either providing maps or profiles of the Earth's surface vertical displacement (and therefore crucial information for tsunami warning systems), or inverting, from combined ground and ionospheric data sets, the various parameters (atmospheric sound speed, viscosity, collision frequencies) controlling the coupling between the surface, the lower atmosphere and the ionosphere.
We first present the state of the art in the modeling of the tsunami-atmospheric coupling, including in terms of slight perturbations in the tsunami phase and group velocity and the dependence of the coupling strength on local time, ocean depth and season. We then show the comparison of modelled signals with observations. For tsunamis, this is done with the different types of measurement that have proven ionospheric tsunami detection over the last 5 years (ground and space GPS, airglow), while we focus on GPS and GOCE observations for seismic waves. These observation systems allowed tracking of the propagation of the signal from the ground (with GPS and seismometers) to the neutral atmosphere (with infrasound sensors and GOCE drag measurements) to the ionosphere (with GPS TEC and airglow among other ionospheric sounding techniques). Modelling results obtained with different techniques (normal modes, spectral element methods, finite differences) are shown. While the fits of the waveform are generally very good, we analyse the differences and draw direction of future 14. The Albuquerque Seismological Laboratory Data Quality Analyzer Science.gov (United States) Ringler, A. T.; Hagerty, M.; Holland, J.; Gee, L. S.; Wilson, D. 2013-12-01 The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several efforts underway to improve data quality at its stations. The Data Quality Analyzer (DQA) is one such development. The DQA is designed to characterize station data quality in a quantitative and automated manner. Station quality is based on the evaluation of various metrics, such as timing quality, noise levels, sensor coherence, and so on. These metrics are aggregated into a measurable grade for each station. The DQA consists of a website, a metric calculator (Seedscan), and a PostgreSQL database.
The website allows the user to make requests for various time periods, review specific networks and stations, adjust the weighting of the station's grade, and plot metrics as a function of time. The website dynamically loads all station data from a PostgreSQL database. The database is central to the application; it acts as a hub where metric values and limited station descriptions are stored. Data is stored at the level of one sensor's channel per day. The database is populated by Seedscan. Seedscan reads and processes miniSEED data to generate metric values. Seedscan, written in Java, compares hashes of metadata and data to detect changes and perform subsequent recalculations. This ensures that the metric values are up to date and accurate. Seedscan can be run as a scheduled task or on demand, and will compute the metrics specified in its configuration file. While many metrics are currently in development, some are completed and being actively used. These include: availability, timing quality, gap count, deviation from the New Low Noise Model, deviation from a station's noise baseline, inter-sensor coherence, and data-synthetic fits. In all, 20 metrics are planned, but any number could be added. ASL is actively using the DQA on a daily basis for station diagnostics and evaluation. As Seedscan is scheduled to run every night, data quality analysts are able to then use the 15. Recent achievements in real-time computational seismology in Taiwan Science.gov (United States) Lee, S.; Liang, W.; Huang, B. 2012-12-01 Real-time computational seismology is now achievable, but requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and the moment tensor inversion (CMT) technique. A real-time online earthquake simulation service is also ready to be opened to researchers and to public earthquake-science education (ROS).
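The station grade described in the Data Quality Analyzer entry above is an aggregation of weighted metrics. A minimal sketch of such a weighted average, with hypothetical metric names and weights rather than the actual DQA schema:

```python
def station_grade(metric_scores, weights):
    """Weighted average of per-metric scores (0-100); weights are user-adjustable,
    mirroring the DQA website's adjustable weighting of a station's grade."""
    total_weight = sum(weights[m] for m in metric_scores)
    return sum(score * weights[m] for m, score in metric_scores.items()) / total_weight

# Hypothetical example: availability counts twice as much as the other metrics.
scores = {"availability": 98.0, "timing_quality": 90.0, "gap_count": 80.0}
weights = {"availability": 2.0, "timing_quality": 1.0, "gap_count": 1.0}
grade = station_grade(scores, weights)  # (98*2 + 90 + 80) / 4 = 91.5
```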
By combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes after an earthquake occurs (RMT obtains the point-source information and ROS completes a 3D simulation in real time). For more information, visit the real-time computational seismology earthquake report webpage (RCS). 16. The lithospheric mantle below southern West Greenland DEFF Research Database (Denmark) Sand, Karina Krarup; Waight, Tod Earle; Pearson, D. Graham 2009-01-01 Geothermobarometry of primarily garnet lherzolitic xenoliths from several localities in southern West Greenland is applied to address the diamond potential, pressure and temperature distribution and the stratigraphy of the subcontinental lithospheric mantle ~600 Ma ago. The samples are from kimbe...... into the reworked Archean North of the Naqssugtoqidian deformation front.... 17. The Lithosphere in Italy: Structure and Seismicity International Nuclear Information System (INIS) Brandmayr, Enrico; Blagoeva Raykova, Reneta; Zuri, Marco; Romanelli, Fabio; Doglioni, Carlo; Panza, Giuliano Francesco 2010-07-01 We propose a structural model for the lithosphere-asthenosphere system for the Italic region by means of the S-wave velocity (VS) distribution with depth. To obtain the velocity structure the following methods are used in sequence: frequency-time analysis (FTAN); 2D tomography (plotted on a 1° × 1° grid); non-linear inversion; smoothing optimization method. The 3D VS structure (and its uncertainties) of the study region is assembled as a juxtaposition of the selected representative cellular models. The distribution of seismicity and heat flow is used as an independent constraint for the definition of the crustal and lithospheric thickness.
The moment tensor inversion of recent damaging earthquakes that occurred in the Italic region is performed with a powerful non-linear technique and is related to the different rheological-mechanical properties of the crust and uppermost mantle. The obtained picture of the lithosphere-asthenosphere system for the Italic region confirms a mantle that is strongly stratified vertically and strongly heterogeneous laterally. The lateral variability in the mantle is interpreted in terms of subduction zones, slab dehydration, inherited mantle chemical anisotropies, asthenospheric upwellings, and so on. The western Alps and the Dinarides have slabs with low dip, whereas the Apennines show a steeper subduction. No evidence for any type of mantle plume is observed. The asymmetric expansion of the Tyrrhenian Sea, which may be interpreted as related to a relative eastward mantle flow with respect to the overlying lithosphere, is confirmed. (author) 18. The continental lithosphere: a geochemical perspective International Nuclear Information System (INIS) Hawkesworth, C.J.; Person, G.; Turner, S.P.; Calsteren, P. Van; Gallagher, K. 1993-01-01 The lithosphere is the cool, strong outer layer of the Earth that is effectively a boundary layer to the convecting interior. The evidence from mantle xenoliths and continental basalts is that the lower continental crust and uppermost mantle are different beneath Archaean and Proterozoic areas. Mantle xenoliths from Archaean terrains, principally the Kaapvaal craton in southern Africa, are significantly depleted in Fe and other major elements which are concentrated in basalts. Nd and Os isotope data on inclusions in diamonds and peridotites, respectively, indicate that such mantle is as old as the overlying Archaean crust. Since it appears to have been coupled to the overlying crust, and to have been isolated from the homogenising effects of convection for long periods of time, it is inferred to be within the continental lithosphere.
The mantle lithosphere beneath Proterozoic and younger areas is less depleted in major elements, and so it is more fertile, less buoyant, and therefore thinner, than the Archaean mantle lithosphere. (author). 136 refs, 14 figs 19. Antarctic Lithosphere Studies: Progress, Problems and Promise Science.gov (United States) Dalziel, I. W. D.; Wilson, T. J. 2017-12-01 In the sixty years since the International Geophysical Year, studies of the Antarctic lithosphere have progressed from basic geological observations and sparse geophysical measurements to continental-scale datasets of radiometric dates, ice thickness, bedrock topography and characteristics, seismic imaging and potential fields. These have been augmented by data from increasingly dense broadband seismic and geodetic networks. The Antarctic lithosphere is known to have been an integral part, indeed a "keystone", of the Pangea (250-185 Ma) and Gondwanaland (540-180 Ma) supercontinents. It is widely believed to have been part of the hypothetical earlier supercontinents Rodinia (1.0-0.75 Ga) and Columbia (Nuna) (2.0-1.5 Ga). Despite the paucity of exposure in East Antarctica, the new potential field datasets have emboldened workers to extrapolate Precambrian geological provinces and structures from neighboring continents into Antarctica. Hence models of the configuration of Columbia and its evolution into Rodinia and Gondwana have been proposed, and rift-flank uplift superimposed on a Proterozoic orogenic root has been hypothesized to explain the Gamburtsev Subglacial Mountains. Mesozoic-Cenozoic rifting has imparted a strong imprint on the West Antarctic lithosphere. Seismic tomographic evidence reveals lateral variation in lithospheric thickness, with the thinnest zones within the West Antarctic rift system and underlying the Amundsen Sea Embayment. Upper mantle low velocity zones are extensive, with a deeper mantle velocity anomaly underlying Marie Byrd Land marking a possible mantle plume.
Misfits between crustal motions measured by GPS and GIA model predictions can, in part, be linked with the changes in lithosphere thickness and mantle rheology. Unusually high uplift rates measured by GPS in the Amundsen region can be interpreted as the response of regions with thin lithosphere and weak mantle to late Holocene ice mass loss. Horizontal displacements across the TAM 20. Estimating lithospheric properties at Atla Regio, Venus Science.gov (United States) Phillips, Roger J. 1994-01-01 1. Vertically Integrated Seismological Analysis II: Inference Science.gov (United States) Arora, N. S.; Russell, S.; Sudderth, E. 2009-12-01 accepting such complex moves need not be hand-designed. Instead, they are automatically determined by the underlying probabilistic model, which is in turn calibrated via historical data and scientific knowledge. Consider a small seismic event which generates weak signals at several different stations, which might independently be mistaken for noise. A birth move may nevertheless hypothesize an event jointly explaining these detections. If the corresponding waveform data then aligns with the seismological knowledge encoded in the probabilistic model, the event may be detected even though no single station observes it unambiguously. Alternatively, if a large outlier reading is produced at a single station, moves which instantiate a corresponding (false) event would be rejected because of the absence of plausible detections at other sensors. More broadly, one of the main advantages of our MCMC approach is its consistent handling of the relative uncertainties in different information sources. By avoiding low-level thresholds, we expect to improve accuracy and robustness. At the conference, we will present results quantitatively validating our approach, using ground-truth associations and locations provided either by simulation or human analysts. 2.
Introduction: seismology and earthquake engineering in Mexico and Central and South America. Science.gov (United States) Espinosa, A.F. 1982-01-01 The results from seismological studies that are used by the engineering community are just one of the benefits obtained from research aimed at mitigating the earthquake hazard. In this issue of Earthquake Information Bulletin current programs in seismology and earthquake engineering, seismic networks, future plans and some of the cooperative programs with different international organizations are described by Latin-American seismologists. The article describes the development of seismology in Latin America and the seismological interest of the OAS. -P.N.Chroston 3. Multiscale habitat suitability index models for priority landbirds in the Central Hardwoods and West Gulf Coastal Plain/Ouachitas Bird Conservation Regions Science.gov (United States) John M. Tirpak; D. Todd Jones-Farrand; Frank R. Thompson III; Daniel J. Twedt; William B. Uihlein III 2009-01-01 Habitat Suitability Index (HSI) models were developed to assess habitat quality for 40 priority bird species in the Central Hardwoods and West Gulf Coastal Plain/Ouachitas Bird Conservation Regions. The models incorporated both site and landscape environmental variables from one of six nationally consistent datasets. Potential habitat was first defined from unique... 4. The Kenya rift revisited: insights into lithospheric strength through data-driven 3-D gravity and thermal modelling Science.gov (United States) Sippel, Judith; Meeßen, Christian; Cacace, Mauro; Mechie, James; Fishwick, Stewart; Heine, Christian; Scheck-Wenderoth, Magdalena; Strecker, Manfred R. 2017-01-01 We present three-dimensional (3-D) models that describe the present-day thermal and rheological state of the lithosphere of the greater Kenya rift region aiming at a better understanding of the rift evolution, with a particular focus on plume-lithosphere interactions.
The key methodology applied is the 3-D integration of diverse geological and geophysical observations using gravity modelling. Accordingly, the resulting lithospheric-scale 3-D density model is consistent with (i) reviewed descriptions of lithological variations in the sedimentary and volcanic cover, (ii) known trends in crust and mantle seismic velocities as revealed by seismic and seismological data and (iii) the observed gravity field. This data-based model is the first to image a 3-D density configuration of the crystalline crust for the entire region of Kenya and northern Tanzania. An upper and a basal crustal layer are differentiated, each composed of several domains of different average densities. We interpret these domains to trace back to the Precambrian terrane amalgamation associated with the East African Orogeny and to magmatic processes during Mesozoic and Cenozoic rifting phases. In combination with seismic velocities, the densities of these crustal domains indicate compositional differences. The derived lithological trends have been used to parameterise steady-state thermal and rheological models. These models indicate that crustal and mantle temperatures decrease from the Kenya rift in the west to eastern Kenya, while the integrated strength of the lithosphere increases. Thereby, the detailed strength configuration appears strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localisation and propagation of rifting. 5. Research and development activities of the seismology section for the period January 1986 to December 1987 International Nuclear Information System (INIS) Basu, T.K.; Murty, G.S. 1988-01-01 This report summarises the R and D in Seismology during the period from January 1986 to December 1987. Major topics of current study are (1) Forensic Seismology, (2) Seismicity and Seismic Risk estimates, (3) Reservoir induced seismicity and (4) Rockburst monitoring.
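For a first-order feel for the gravity modelling in the Kenya rift entry above, the effect of a laterally extensive density anomaly can be approximated by the textbook infinite-slab (Bouguer) formula g = 2πG·Δρ·h. A minimal sketch; the paper's data-driven 3-D density modelling is, of course, far more elaborate:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_mgal(density_contrast_kgm3, thickness_m):
    """Gravity effect (mGal) of an infinite horizontal slab: g = 2*pi*G*drho*h."""
    g_si = 2.0 * math.pi * G * density_contrast_kgm3 * thickness_m  # m/s^2
    return g_si * 1e5  # 1 mGal = 1e-5 m/s^2

# E.g., a 5 km thick body 300 kg/m^3 denser than its surroundings:
# bouguer_slab_mgal(300.0, 5000.0) ≈ 63 mGal
```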
Considerable effort is devoted to the development of seismic data acquisition systems and theoretical aspects of seismology. (author) 6. Seismo-Live: Training in Seismology with Jupyter Notebooks Science.gov (United States) Krischer, Lion; Tape, Carl; Igel, Heiner 2016-04-01 Seismological training tends to occur within the isolation of a particular institution with a limited set of tools (codes, libraries) that are often not transferrable outside. Here, we propose to overcome these limitations with a community-driven library of Jupyter notebooks dedicated to training on any aspect of seismology for purposes of education and outreach, on-site or archived tutorials for codes, classroom instruction, and research. A Jupyter notebook (jupyter.org) is an open-source interactive computational environment that allows combining code execution, rich text, mathematics, and plotting. It can be considered a platform that supports reproducible research, as all inputs and outputs may be stored. Text, external graphics, and equations can be handled using Markdown (incl. LaTeX) format. Jupyter notebooks are driven by standard web browsers, can be easily exchanged in text format, or converted to other documents (e.g. PDF, slide shows). They provide an ideal format for practical training in seismology. A pilot platform was set up with a dedicated server such that the Jupyter notebooks can be run in any browser (PC, notepad, smartphone). We show the functionalities of the Seismo-Live platform with examples from computational seismology, seismic data access and processing using the ObsPy library, seismic inverse problems, and others. The current examples all use the Python programming language but any free language can be used. Potentially, such community platforms could be integrated with the EPOS-IT infrastructure and extended to other fields of Earth sciences. 7.
Lithospheric flexure beneath the Freyja Montes Foredeep, Venus: Constraints on lithospheric thermal gradient and heat flow International Nuclear Information System (INIS) 1990-01-01 Analysis of Venera 15 and 16 radar images and topographic data from the Freyja Montes region on Venus suggests that this mountain belt formed as a result of a sequence of underthrusts of the lithosphere of the North Polar Plains beneath the highlands of Ishtar Terra. The Freyja Montes deformation zone consists, south to north, of a linear orogenic belt, an adjacent plateau, a steep scarp separating the plateau from the North Polar Plains, a linear depression at the base of the scarp, and an outer rise. The topographic profiles of the depression and outer rise are remarkably similar to those of a foreland deep and rise formed by the flexure of an underthrusting plate beneath a terrestrial mountain range. The authors test the lithospheric flexure hypothesis and estimate the effective thickness Te of the elastic lithosphere of the underthrusting portion of the North Polar Plains by fitting individual topographic profiles to deflection curves for a broken elastic plate. The theoretical curves fit the observed topographic profiles to within measurement error for values of flexural rigidity D in the range (0.8-3) × 10²² N m, equivalent to Te in the range 11-18 km. Under the assumption that the base of the mechanical lithosphere is limited by the creep strength of olivine, the mean lithospheric thermal gradient is 14-23 K/km. That the inferred thermal gradient is similar to the value expected for the global mean gradient on the basis of scaling from Earth provides support for the hypothesis that simple conduction dominates lithospheric heat transport on Venus relative to lithospheric recycling and volcanism. 8. Autonomous geodynamics of the Pamir-Tien Shan junction zone from seismology data Science.gov (United States) Lukk, A. A.; Shevchenko, V. I.; Leonova, V. G.
2015-11-01 The geodynamics of the Tajik Depression, the junction zone of the Pamirs and Tien Shan, is typically considered in the context of the plate tectonic concept, which implies intense subhorizontal compression of the zone resulting from the convergence of the Indian and Eurasian lithospheric plates. This convergence has been reliably confirmed by the GPS measurements. However, the joint analysis of the geological structure, seismicity, and geodimeter measurements conducted during a few years at the Garm geodynamical testing site of the Schmidt Institute of Physics of the Earth, Russian Academy of Sciences, demonstrates a widening of the Tajik Depression instead of its shortening, as should be expected from the subhorizontal compression predominant in the present-day stress state of this region. This conclusion, together with the data from other regions, suggests that, along with the plate tectonic mechanisms, there are also other, local, autonomous drivers that contribute to the tectogenesis of this region. Besides, the probable existence of these autonomous sources within the Tajik Depression directly follows from the seismology data. Among them is the crustal spreading within the depression suggested by the seismotectonic displacements in the focal mechanisms of the earthquakes. These displacements are directed in different azimuths away from the axial, most subsided, part of the depression at a depth of 20-30 km. Above this region the distribution of seismotectonic deformations (STD) is chaotic. This pattern of deformation is barely accounted for by a simple model of subhorizontal compression of the Earth's crust in the region. In our opinion, these features of the seismotectonic deformation in the crust within the studied part of the Tajik Depression are probably associated with the gain in volume of the rocks due to the inflow of additional material, which is supplied from the lower crust or upper mantle by deep fluids.
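The Freyja Montes entry (no. 7 above) reports both flexural rigidity D and the equivalent effective elastic thickness Te, which are related by D = E·Te³ / (12(1 − ν²)). A minimal sketch of the conversion, assuming typical values E ≈ 65 GPa and ν = 0.25 (the abstract does not state which values the authors used):

```python
def elastic_thickness_km(D_Nm, E=6.5e10, nu=0.25):
    """Invert D = E*Te**3 / (12*(1 - nu**2)) for Te, returned in km.
    E (Young's modulus, Pa) and nu (Poisson's ratio) are assumed values."""
    te_m = (12.0 * (1.0 - nu**2) * D_Nm / E) ** (1.0 / 3.0)
    return te_m / 1e3

# The quoted rigidity range (0.8-3) x 10^22 N m then maps to roughly 11-17 km,
# consistent with the 11-18 km range quoted in the abstract.
```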
This increase in the rock volume 9. Thermal classification of lithospheric discontinuities beneath USArray Science.gov (United States) Hansen, Steven M.; Dueker, Ken; Schmandt, Brandon 2015-12-01 Broadband seismic data from the United States were processed into Ps and Sp receiver function image volumes for the purpose of constraining negative velocity gradients (NVG) at depths between the Moho and 200 km. Moho depth picks from the two independent datasets are in good agreement; however, large discrepancies in NVG picks occur and are attributed to free-surface multiples which obscure deep NVG arrivals in the Ps data. From the Sp data, shallow NVG are found west of the Rockies and in the central US, while deep and sporadic NVG are observed beneath the Great Plains and northern Rockies. To aid the interpretation of the observed NVG arrivals, the mantle thermal field is estimated by mapping surface wave tomography velocities to temperature assuming an anelastic olivine model. The distribution of temperature versus NVG depth is bi-modal and displays two distinct thermal populations that are interpreted to represent both the lithosphere-asthenosphere boundary (LAB) and mid-lithosphere discontinuities (MLD). LAB arrivals occur in the western US at 60-85 km depth and 1200-1400 °C, suggesting that they manifest partial melt near the base of the thermal plate. MLD arrivals primarily occur at 70-110 km depth and 700-900 °C, and we hypothesize that these arrivals are caused by a low-velocity metasomatic layer containing phlogopite resulting from magma crystallization products that accumulate within long-lived thick lithosphere. 10. Fossil plume head beneath the Arabian lithosphere? Science.gov (United States) Stein, Mordechai; Hofmann, Albrecht W.
1992-12-01 Phanerozoic alkali basalts from Israel, which have erupted over the past 200 Ma, have isotopic compositions similar to PREMA ("prevalent mantle") with narrow ranges of initial εNd(T) = +3.9 to +5.9; 87Sr/86Sr(T) = 0.70292-0.70334; 206Pb/204Pb(T) = 18.88-19.99; 207Pb/204Pb(T) = 15.58-15.70; and 208Pb/204Pb(T) = 38.42-39.57. Their Nb/U (43 ± 9) and Ce/Pb (26 ± 6) ratios are identical to those of normal oceanic basalts, demonstrating that the basalts are essentially free of crustal contamination. Overall, the basalts are chemically and isotopically indistinguishable from many ordinary plume basalts, but no plume track can be identified. We propose that these and other, similar, magmas from the Arabian plate originated from a "fossilized" head of a mantle plume, which was unable to penetrate the continental lithosphere and was therefore trapped and stored beneath it. The plume head was emplaced some time between the late Proterozoic crust formation and the initiation of the Phanerozoic magmatic cycles. Basalts from rift environments in other continental localities show similar geochemistry to that of the Arabian basalts and their sources may also represent fossil plume heads trapped below the continents. We suggest that plume heads are, in general, characterized by the PREMA isotopic mantle signature, because the original plume sources (which may have HIMU or EM-type composition) have been diluted by overlying mantle material, which has been entrained by the plume heads during ascent. On the Arabian plate, rifting and thinning of the lithosphere caused partial melting of the stored plume, which led to periodic volcanism. In the late Cenozoic, the lithosphere broke up and the Red Sea opened. N-MORB tholeiites are now erupting in the central trough of the Red Sea, where the lithosphere has moved apart and the fossil plume has been exhausted, whereas E-MORBs are erupting in the northern and southern troughs, still tapping the plume reservoir.
Fossil plumes, which are 11. Ancient Continental Lithosphere Dislocated Beneath Ocean Basins Along the Mid-Lithosphere Discontinuity: A Hypothesis Science.gov (United States) Wang, Zhensheng; Kusky, Timothy M.; Capitanio, Fabio A. 2017-09-01 The documented occurrence of ancient continental cratonic roots beneath several oceanic basins remains poorly explained by the plate tectonic paradigm. These roots are found beneath some ocean-continent boundaries, on the trailing sides of some continents, extending for hundreds of kilometers or farther into oceanic basins. We postulate that these cratonic roots were left behind during plate motion, by differential shearing along the seismically imaged mid-lithosphere discontinuity (MLD), and then emplaced beneath the ocean-continent boundary. Here we use numerical models of cratons with realistic crustal rheologies drifting at observed plate velocities to support the idea that the mid-lithosphere weak layer fostered the decoupling and offset of the African continent's buoyant cratonic root, which was left behind during Meso-Cenozoic continental drift and emplaced beneath the Atlantic Ocean. We show that in some cratonic areas, the MLD plays a similar role as the lithosphere-asthenosphere boundary for accommodating lateral plate tectonic displacements. 12. Rotational Seismology: AGU Session, Working Group, and Website Science.gov (United States) Lee, William H.K.; Igel, Heiner; Todorovska, Maria I.; Evans, John R. 2007-01-01 Introduction Although effects of rotational motions due to earthquakes have long been observed (e.g., Mallet, 1862), nevertheless Richter (1958, p. 213) stated that: 'Perfectly general motion would also involve rotations about three perpendicular axes, and three more instruments for these. Theory indicates, and observation confirms, that such rotations are negligible.' However, Richter provided no references for this claim.
Seismology is based primarily on the observation and modeling of three-component translational ground motions. Nevertheless, theoretical seismologists (e.g., Aki and Richards, 1980, 2002) have argued for decades that the rotational part of ground motions should also be recorded. It is well known that standard seismometers are quite sensitive to rotations and therefore subject to rotation-induced errors. The paucity of observations of rotational motions is mainly the result of a lack, until recently, of affordable rotational sensors of sufficient resolution. Nevertheless, in the past decade, a number of authors have reported direct observations of rotational motions and rotations inferred from rigid-body rotations in short baseline accelerometer arrays, creating a burgeoning library of rotational data. For example, ring laser gyros in Germany and New Zealand have led to the first significant and consistent observations of rotational motions from distant earthquakes (Igel et al., 2005, 2007). A monograph on Earthquake Source Asymmetry, Structural Media and Rotation Effects was recently published by Teisseyre et al. (2006). Measurement of rotational motions has implications for: (1) recovering the complete ground-displacement history from seismometer recordings; (2) further constraining earthquake rupture properties; (3) extracting information about subsurface properties; and (4) providing additional ground motion information to earthquake engineers for seismic design. A special session on Rotational Motions in Seismology was convened by H 13. Lithosphere and upper-mantle structure of the southern Baltic Sea estimated from modelling relative sea-level data with glacial isostatic adjustment Science.gov (United States) Steffen, H.; Kaufmann, G.; Lampe, R. 2014-06-01 During the last glacial maximum, a large ice sheet covered Scandinavia, which depressed the Earth's surface by several hundred metres.
In northern central Europe, mass redistribution in the upper mantle led to the development of a peripheral bulge. It has been subsiding since the beginning of deglaciation due to the viscoelastic behaviour of the mantle. We analyse relative sea-level (RSL) data of southern Sweden, Denmark, Germany, Poland and Lithuania to determine the lithospheric thickness and radial mantle viscosity structure for distinct regional RSL subsets. We load a 1-D Maxwell-viscoelastic earth model with a global ice-load history model of the last glaciation. We test two commonly used ice histories, RSES from the Australian National University and ICE-5G from the University of Toronto. Our results indicate that the lithospheric thickness varies, depending on the ice model used, between 60 and 160 km. The lowest values are found in the Oslo Graben area and the western German Baltic Sea coast. In between, thickness increases by at least 30 km, tracing the Ringkøbing-Fyn High. In Poland and Lithuania, lithospheric thickness reaches up to 160 km. However, the latter values are not well constrained as the confidence regions are large. Upper-mantle viscosity is found to bracket (2-7) × 10^20 Pa s when using ICE-5G. Employing RSES, much higher values of 2 × 10^21 Pa s are obtained for the southern Baltic Sea. Further investigations should evaluate whether this ice-model version and/or the RSL data need revision. We confirm that the lower-mantle viscosity in Fennoscandia can only be poorly resolved. The lithospheric structure inferred from RSES partly supports structural features of regional and global lithosphere models based on thermal or seismological data. While there is agreement in eastern Europe and southwest Sweden, the structure in an area from south of Norway to northern Germany shows large discrepancies for two of the tested lithosphere models. The lithospheric

14.
Petrology of Serpentinites and Rodingites in the Oceanic Lithosphere
OpenAIRE
Klein, Frieder
2009-01-01
Serpentinization, steatitization, and rodingitization are consequences of seawater reaction with lithospheric mantle. These processes take place coevally within the oceanic lithosphere and are related to circulation pathways, lithologic makeup of rocks along the flow path, fluid flux, and temperature. While the boundary conditions are set by the history of magmatic and tectonic accretion of the lithosphere, fluid-rock equilibria determine what reactions take place and where in the system. Pet...

15. Global model for the lithospheric strength and effective elastic thickness
OpenAIRE
Magdala Tesauro; Mikhail Kaban; S. A. P. L. Cloetingh
2013-01-01
Global distribution of the strength and effective elastic thickness (Te) of the lithosphere are estimated using physical parameters from recent crustal and lithospheric models. For the Te estimation we apply a new approach, which provides a possibility to take into account variations of Young modulus (E) within the lithosphere. In view of the large uncertainties affecting strength estimates, we evaluate global strength and Te distributions for possible end-member 'hard' (HRM) and a 'soft' (SR...

16. QuakeML - An XML Schema for Seismology
Science.gov (United States)
Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.
2004-12-01
We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data.
Due to its extensible definition capabilities, its wide acceptance, and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should, in our opinion, be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.

17. Benefits of rotational ground motions for planetary seismology
Science.gov (United States)
Donner, S.; Joshi, R.; Hadziioannou, C.; Nunn, C.; van Driel, M.; Schmelzbach, C.; Wassermann, J. M.; Igel, H.
2017-12-01
Exploring the internal structure of planetary objects is fundamental to understanding the evolution of our solar system. In contrast to Earth, planetary seismology is hampered by the limited number of stations available, often just a single one. Classic seismology is based on the measurement of three components of translational ground motion, and its methods are mainly developed for a larger number of available stations. Therefore, the application of classical seismological methods to other planets is very limited. Here, we show that the additional measurement of three components of rotational ground motion could substantially improve the situation. From sparse or single-station networks measuring translational and rotational ground motions it is possible to obtain additional information on structure and source. This includes direct information on local subsurface seismic velocities, separation of seismic phases, propagation direction of seismic energy, crustal scattering properties, as well as moment tensor source parameters for regional sources. The potential of this methodology is highlighted through synthetic forward and inverse modeling experiments.

18.
ObsPy: A Python Toolbox for Seismology
Science.gov (United States)
Krischer, Lion; Megies, Tobias; Sales de Andrade, Elliott; Barsch, Robert; MacCarthy, Jonathan
2017-04-01
In recent years the Python ecosystem evolved into one of the most powerful and productive scientific environments across disciplines. ObsPy (https://www.obspy.org) is a fully community-driven, open-source project dedicated to providing a bridge for seismology into that ecosystem. It does so by offering:
- Read and write support for essentially every commonly used data format in seismology with a unified interface and automatic format detection. This includes waveform data (MiniSEED, SAC, SEG-Y, Reftek, ...) as well as station (SEED, StationXML, ...) and event meta information (QuakeML, ZMAP, ...).
- Integrated access to the largest data centers, web services, and real-time data streams (FDSNWS, ArcLink, SeedLink, ...).
- A powerful signal processing toolbox tuned to the specific needs of seismologists.
- Utility functionality like travel time calculations with the TauP method, geodetic functions, and data visualizations.
ObsPy has been in constant development for more than seven years and is developed and used by scientists around the world, with successful applications in all branches of seismology. Additionally, it nowadays serves as the foundation for a large number of more specialized packages. This presentation will give a short overview of the capabilities of ObsPy and point out several representative or new use cases. Additionally, we will discuss the road ahead as well as the long-term sustainability of open-source scientific software.

19. Hydrogeochemical and stream-sediment reconnaissance, orientation study, Ouachita Mountain area, Arkansas. National Uranium Resource Evaluation Program
International Nuclear Information System (INIS)
Steele, K.F.
1982-08-01
A hydrogeochemical ground water orientation study was conducted in the multi-mineralized area of the Ouachita Mountains, Arkansas, in order to evaluate the usefulness of ground water as a sampling medium for uranium exploration in similar areas. Ninety-three springs and nine wells were sampled in Clark, Garland, Hot Springs, Howard, Montgomery, Pike, Polk, and Sevier Counties. Manganese, barite, celestite, cinnabar, stibnite, copper, lead, and zinc are present. The following parameters were determined: pH, conductivity, alkalinity, U, Br, Cl, F, He, Mn, Na, V, Al, Dy, NO3, NH3, SO4, and PO4. The minerals appear to significantly affect the chemistry of the ground water. This report is issued in draft form, without detailed technical and copy editing. This was done to make the report available to the public before the end of the National Uranium Resource Evaluation

20. Global equivalent magnetization of the oceanic lithosphere
Science.gov (United States)
Dyment, J.; Choi, Y.; Hamoudi, M.; Lesur, V.; Thebault, E.
2015-11-01
As a by-product of the construction of a new World Digital Magnetic Anomaly Map over oceanic areas, we use an original approach based on the global forward modeling of seafloor spreading magnetic anomalies and their comparison to the available marine magnetic data to derive the first map of the equivalent magnetization over the World's oceans. This map reveals consistent patterns related to the age of the oceanic lithosphere, the spreading rate at which it was formed, and the presence of mantle thermal anomalies which affect seafloor spreading and the resulting lithosphere.
As for the age, the equivalent magnetization decreases significantly during the first 10-15 Myr after its formation, probably due to the alteration of crustal magnetic minerals under pervasive hydrothermal alteration, then increases regularly between 20 and 70 Ma, reflecting variations in the field strength or source effects such as the acquisition of a secondary magnetization. As for the spreading rate, the equivalent magnetization is twice as strong in areas formed at fast rate as in those formed at slow rate, with a threshold at ~40 km/Myr, in agreement with an independent global analysis of the amplitude of Anomaly 25. This result, combined with those from the study of the anomalous skewness of marine magnetic anomalies, allows building a unified model for the magnetic structure of normal oceanic lithosphere as a function of spreading rate. Finally, specific areas affected by thermal mantle anomalies at the time of their formation exhibit peculiar equivalent magnetization signatures, such as the cold Australian-Antarctic Discordance, marked by a lower magnetization, and several hotspots, marked by a high magnetization.

1. The extending lithosphere (Arthur Holmes Medal Lecture)
Science.gov (United States)
Brun, Jean-Pierre
2017-04-01
Extension of the lithosphere gives birth to a wide range of structures, with characteristic widths between 10 and 1000 km, which includes continental rifts, passive margins, oceanic rifts, core complexes, and back-arc basins. Because the rheology of rocks strongly depends on temperature, this variety of extensional structures falls in two broad categories of extending lithospheres according to the initial Moho temperature TM. "Cold extending systems", with TM < 750°C and mantle-dominated strength, lead to narrow rifts, whereas "hot extending systems", with TM > 750°C and crustal-dominated strength, lead, depending on strain rate, to either wide rifts or metamorphic core complexes.
A much less quoted product of extension is the exhumation of high-pressure (HP) metamorphic rocks occurring in domains of back-arc extension driven by slab rollback (e.g. Aegean; Apennines-Calabrian) or when the subduction upper plate undergoes extension for plate-kinematic reasons (e.g. Norwegian Caledonides; Papua New Guinea). In these tectonic environments, well-documented pressure-temperature-time (P-T-t) paths of HP rocks show a two-stage retrogression path whose first part corresponds to an isothermal large pressure drop ΔP proportional to the maximum pressure Pmax recorded by the rocks. This linear relation between ΔP and Pmax, which likely results from a stress switch between compression and extension at the onset of exhumation, is in fact observed in all HP metamorphism provinces worldwide, suggesting that the exhumation of HP rocks in extension is a general process rather than an uncommon case. In summary, the modes and products of extension are so diverse that, taken all together, they constitute a very versatile natural laboratory to decipher the rheological complexities of the continental lithosphere and their mechanical implications.

2. Trends and opportunities in seismology [Asilomar, California, January 3-9, 1976]
Energy Technology Data Exchange (ETDEWEB)
1977-01-01
Thirty-five experts in the fields of geology, geophysics, and engineering, from academia, government, and industry, were invited to participate in a workshop and address the many problems of national and global concern that require seismological expertise for their solutions. This report reviews the history, accomplishments, and status of seismology; assesses changing trends in seismological research and applications; and recommends future directions in the light of these changes and of the growing needs of society in areas in which seismology can make significant contributions.
The first part of the volume discusses areas of opportunity (understanding earthquakes and reducing their hazards; exploration, energy, and resources; understanding the earth and planets) and realizing the benefits (the roles of Federal, state, and local governments, industry, and universities). The second part, Background and Progress, briefly considers each of the following topics: the birth and early growth of seismology, nuclear test monitoring and its scientific ramifications, instrumentation and data processing, geodynamics and plate tectonics, theoretical seismology, structure and composition of the earth, exploration seismology, seismic exploration for minerals, earthquake source mechanism studies, engineering seismology, strong ground motion and related earthquake hazards, volcanoes, tsunamis, planetary seismology, and international aspects of seismology. 26 figures. (RWR)

3. High-temperature peridotites - lithospheric or asthenospheric?
International Nuclear Information System (INIS)
Hops, J.J.; Gurney, J.J.
1990-01-01
High-temperature peridotites by definition yield equilibration temperatures greater than 1100 degrees C. On the basis of temperature and pressure calculations, these high-temperature peridotites are amongst the deepest samples entrained by kimberlites en route to the surface. Conflicting models proposing either a lithospheric or asthenospheric origin for the high-temperature peridotites have been suggested. A detailed study of these xenoliths from a single locality, the Jagersfontein kimberlite in the Orange Free State, has been completed as a means of resolving this controversy. 10 refs., 2 figs

4. Large earthquake rates from geologic, geodetic, and seismological perspectives
Science.gov (United States)
Jackson, D. D.
2017-12-01
Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology.
Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include temporal behavior of seismic and tectonic moment rates; shape of the earthquake magnitude distribution; upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; value of crustal rigidity; and relation between faults at depth and their surface fault traces, to name just a few. In this report I'll estimate the quantitative implications for estimating large earthquake rates. Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes

5.
Piecewise delamination of Moroccan lithosphere from beneath the Atlas Mountains
Science.gov (United States)
Bezada, M. J.; Humphreys, E. D.; Davila, J. M.; Carbonell, R.; Harnafi, M.; Palomeras, I.; Levander, A.
2014-04-01
The elevation of the intracontinental Atlas Mountains of Morocco and surrounding regions requires a mantle component of buoyancy, and there is consensus that this buoyancy results from an abnormally thin lithosphere. Lithospheric delamination under the Atlas Mountains and thermal erosion caused by upwelling mantle have each been suggested as thinning mechanisms. We use seismic tomography to image the upper mantle of Morocco. Our imaging resolves the location and shape of lithospheric cavities and of delaminated lithosphere ~400 km beneath the Middle Atlas. We propose discontinuous delamination of an intrinsically unstable Atlas lithosphere, enabled by the presence of anomalously hot mantle, as a mechanism for producing the imaged structures. The Atlas lithosphere was made unstable by a combination of tectonic shortening and eclogite loading during Mesozoic rifting and Cenozoic magmatism. The presence of hot mantle sourced from regional upwellings in northern Africa or the Canary Islands enhanced the instability of this lithosphere. Flow around the retreating Alboran slab focused upwelling mantle under the Middle Atlas, which we infer to be the site of the most recent delamination. The Atlas Mountains of Morocco stand as an example of large-scale lithospheric loss in a mildly contractional orogen.

6. Global model for the lithospheric strength and effective elastic thickness
NARCIS (Netherlands)
Tesauro, M.; Kaban, M.K.; Cloetingh, S.A.P.L.
2013-01-01
Global distribution of the strength and effective elastic thickness (Te) of the lithosphere are estimated using physical parameters from recent crustal and lithospheric models. For the Te estimation we apply a new approach, which provides a possibility to take into account variations of Young

7.
A lithospheric perspective on structure and evolution of Precambrian cratons
DEFF Research Database (Denmark)
Artemieva, Irina
2012-01-01
The purpose of this chapter is to provide a summary of geophysical data on the structure of the stable continental lithosphere and its evolution since the Archean. Here, the term lithosphere is used to define the outer layer of the Earth which includes the crust and uppermost mantle, forms the ro...

8. Sustainable access to data, products, services and software from the European seismological Research Infrastructures: the EPOS TCS Seismology
Science.gov (United States)
Haslinger, Florian; Dupont, Aurelien; Michelini, Alberto; Rietbrock, Andreas; Sleeman, Reinoud; Wiemer, Stefan; Basili, Roberto; Bossu, Rémy; Cakti, Eser; Cotton, Fabrice; Crawford, Wayne; Diaz, Jordi; Garth, Tom; Locati, Mario; Luzi, Lucia; Pinho, Rui; Pitilakis, Kyriazis; Strollo, Angelo
2016-04-01
Easy, efficient and comprehensive access to data, data products, scientific services and scientific software is a key ingredient in enabling research at the frontiers of science. Organizing this access across the European Research Infrastructures in the field of seismology, so that it best serves user needs, takes advantage of state-of-the-art ICT solutions, provides cross-domain interoperability, and is organizationally and financially sustainable in the long term, is the core challenge of the implementation phase of the Thematic Core Service (TCS) Seismology within the EPOS-IP project.
Building upon the existing European-level infrastructures ORFEUS for seismological waveforms, EMSC for seismological products, and EFEHR for seismological hazard and risk information, and implementing a pilot Computational Earth Science service starting from the results of the VERCE project, the work within the EPOS-IP project focuses on improving and extending the existing services, aligning them with global developments, to ultimately produce a well-coordinated framework that is technically, organizationally, and financially integrated with the EPOS architecture. This framework needs to respect the roles and responsibilities of the underlying national research infrastructures that are the data owners and main providers of data and products, and allow for active input and feedback from the (scientific) user community. At the same time, it needs to remain flexible enough to cope with unavoidable challenges in the availability of resources and dynamics of contributors. The technical work during the next years is organized in four areas:
- constructing the next-generation software architecture for the European Integrated (waveform) Data Archive EIDA,
- developing advanced metadata and station information services,
- fully integrating strong-motion waveforms and derived parametric engineering-domain data, and
- advancing the integration of mobile (temporary) networks and OBS deployments in

9. The westward drift of the lithosphere: A tidal ratchet?
Directory of Open Access Journals (Sweden)
A. Carcaterra
2018-03-01
Is the westerly rotation of the lithosphere an ephemeral, accidental, recent phenomenon, or is it a stable process of Earth's geodynamics? The reason why the tidal drag has been questioned as the mechanism determining the lithospheric shift relative to the underlying mantle is the apparently too-high viscosity of the asthenosphere.
However, plate boundary asymmetries are a robust indication of the 'westerly' decoupling of the entire Earth's outer lithospheric shell, and new studies support lower viscosities in the low-velocity layer (LVZ) atop the asthenosphere. Since the solid Earth tide oscillation is longer on one side relative to the other due to the contemporaneous Moon's revolution, we demonstrate that a non-linear rheological behavior is expected in the lithosphere-mantle interplay. This may provide a sort of ratchet favoring lowering of the LVZ viscosity under shear, allowing decoupling in the LVZ and triggering the westerly motion of the lithosphere relative to the mantle.

10. Thirty Years of Innovation in Seismology with the IRIS Consortium
Science.gov (United States)
Sumy, D. F.; Woodward, R.; Aderhold, K.; Ahern, T. K.; Anderson, K. R.; Busby, R.; Detrick, R. S.; Evers, B.; Frassetto, A.; Hafner, K.; Simpson, D. W.; Sweet, J. R.; Taber, J.
2015-12-01
The United States academic seismology community, through the National Science Foundation (NSF)-funded Incorporated Research Institutions for Seismology (IRIS) Consortium, has promoted and encouraged a rich environment of innovation and experimentation in areas such as seismic instrumentation, data processing and analysis, teaching and curriculum development, and academic science. As the science continually evolves, IRIS helps drive the market for new research tools that enable science by establishing a variety of standards and goals. This has often involved working directly with manufacturers to better define the technology required, co-funding key development work or early production prototypes, and purchasing initial production runs. IRIS activities have helped establish de facto international standards and impacted the commercial sector in areas such as seismic instrumentation, open-access data management, and professional development.
Key institutional practices, conducted and refined over IRIS's thirty-year history of operations, have focused on open-access data availability; full retention of maximum-bandwidth, continuous data; and direct community access to state-of-the-art seismological instrumentation and software. These practices have helped to cultivate and support a thriving commercial ecosystem, and have been a key element in the professional development of multiple generations of seismologists who now work in both industry and academia. Looking toward the future, IRIS is increasing its engagement with industry to better enable bi-directional exchange of techniques and technology, and enhancing the development of tomorrow's workforce. In this presentation, we will illustrate how IRIS has promoted innovations grown out of the academic community and spurred technological advances in both academia and industry.

11. Inge Lehmann's work materials and seismological epistolary archive
Directory of Open Access Journals (Sweden)
Erik Hjortenberg
2009-06-01
The Inge Lehmann archive contains thousands of seismological work documents from Inge Lehmann's private home. For a long time the author thought that the main concern was to keep the documents for posterity. There is now a renewed interest in Inge Lehmann; some documents were presented in a poster at ESC Potsdam 2004, and the collection of documents was scanned and catalogued in 2005-2006 at Storia Geofisica Ambiente in Bologna. Inge Lehmann (1888-1993) is famous for her discovery in 1936 of the earth's inner core and for work on the upper mantle. A short biography is given. After her retirement in 1953 she worked at home in Denmark, and abroad in the USA and in Canada. She took part in the creation of the European Seismological Commission in 1951, and in the creation of the International Seismological Centre in 1964. Inge Lehmann received many awards.
Some letters from her early correspondence with Harold Jeffreys are discussed; they show how the inner core was discussed already in 1932. A few of the author's reminiscences of Inge Lehmann are given.

12. SEIS-PROV: Practical Provenance for Seismological Data
Science.gov (United States)
Krischer, L.; Smith, J. A.; Tromp, J.
2015-12-01
It is widely recognized that reproducibility is crucial to advance science, but at the same time it is very hard to actually achieve. This results in it being recognized but also mostly ignored by a large fraction of the community. A key ingredient towards full reproducibility is to capture and describe the history of data, an issue known as provenance. We present SEIS-PROV, a practical format and data model to store provenance information for seismological data. In a seismological context, provenance can be seen as information about the processes that generated and modified a particular piece of data. For synthetic waveforms the provenance information describes which solver and settings therein were used to generate it. When looking at processed seismograms, the provenance conveys information about the different time-series analysis steps that led to it. Additional uses include the description of derived data types, such as cross-correlations and adjoint sources, enabling their proper storage and exchange. SEIS-PROV is based on W3C PROV (http://www.w3.org/TR/prov-overview/), a standard for generic provenance information. It then applies an additional set of constraints to make it suitable for seismology.
We present a definition of the SEIS-PROV format, a way to check whether any given file is a valid SEIS-PROV document, and two sample implementations: one in SPECFEM3D GLOBE (https://geodynamics.org/cig/software/specfem3d_globe/) to store the provenance information of synthetic seismograms, and another as part of the ObsPy (http://obspy.org) framework enabling automatic tracking of provenance information during a series of analysis and transformation stages. This, along with tools to visualize and interpret provenance graphs, offers a description of data history that can be readily tracked, stored, and exchanged.

13. Coronal seismology: waves and oscillations in stellar coronae
CERN Document Server
Stepanov, Alexander; Nakariakov, Valery M
2012-01-01
This concise and systematic account of the current state of this new branch of astrophysics presents the theoretical foundations of plasma astrophysics, magnetohydrodynamics and coronal magnetic structures, taking into account the full range of available observation techniques, from radio to gamma. The book discusses stellar loops during flare energy releases, MHD waves and oscillations, plasma instabilities and heating, and charged particle acceleration. Current trends and developments in MHD seismology of solar and stellar coronal plasma systems are also covered, while recent p

14. The experimental operation of a seismological data centre at Blacknest
International Nuclear Information System (INIS)
Grover, F.H.
1978-10-01
A short account is given of the development and operation of a unit within Blacknest which acts as a centre for handling data received from overseas seismological array stations and stations in the British Isles, and also exchanges data with other centres.
The work has been carried out as a long-term experiment to assess the capability of small networks of existing research and development stations to participate in the monitoring of a possible future Comprehensive Test Ban treaty (CTB) and to gain experience of the operational requirements for Data Centres. A preliminary assessment of a UK National Technical Means (NTM) for verifying a CTB is obtained inter alia. (author)

15. Density heterogeneity of the cratonic lithosphere
DEFF Research Database (Denmark)
Cherepanova, Yulia; Artemieva, Irina
2015-01-01
Using free-board modeling, we examine a vertically-averaged mantle density beneath the Archean-Proterozoic Siberian craton in the layer from the Moho down to the base of the chemical boundary layer (CBL). Two models are tested: in Model 1 the base of the CBL coincides with the LAB, whereas in Model 2... the base of the CBL is at a 180 km depth. The uncertainty of density model is... density structure of the Siberian lithospheric mantle with a strong... correlation between mantle density variations and the tectonic setting. Three types of cratonic mantle are recognized from mantle density anomalies. 'Pristine' cratonic regions not sampled by kimberlites have the strongest depletion, with density deficit of 1.8-3.0% (and SPT density of 3.29-3.33 t/m3...

16. seismo-live: Training in Seismology using Jupyter Notebooks
Science.gov (United States)
Igel, Heiner; Krischer, Lion; van Driel, Martin; Tape, Carl
2017-04-01
Practical training in computational methodologies is still underrepresented in Earth science curricula despite the increasing use of sometimes highly sophisticated simulation and data processing technologies in research projects. At the same time, well-engineered community codes make it easy to return results, yet with the danger that the inherent traps of black-box solutions are not well understood.
For this purpose we have initiated a community platform (www.seismo-live.org) where Python-based Jupyter notebooks can be accessed and run without any downloads or local software installations. The increasingly popular Jupyter notebooks allow combining markup language, graphics, and equations with interactive, executable Python code. The platform already includes general Python training, an introduction to the ObsPy library for seismology, as well as seismic data processing, noise analysis, and a variety of forward solvers for seismic wave propagation. In addition, an example is shown of how Jupyter notebooks can be used to increase the reproducibility of published results. Submissions of Jupyter notebooks for general seismology are encouraged. The platform can be used for complementary teaching in Earth Science courses on compute-intensive research areas. We present recent developments and new features.

17. seismo-live: Training in Computational Seismology using Jupyter Notebooks
Science.gov (United States)
Igel, H.; Krischer, L.; van Driel, M.; Tape, C.
2016-12-01
Practical training in computational methodologies is still underrepresented in Earth science curricula despite the increasing use of sometimes highly sophisticated simulation technologies in research projects. At the same time, well-engineered community codes make it easy to return simulation-based results, yet with the danger that the inherent traps of numerical solutions are not well understood. It is our belief that training with highly simplified numerical solutions (here to the equations describing elastic wave propagation) with carefully chosen elementary ingredients of simulation technologies (e.g., finite-differencing, function interpolation, spectral derivatives, numerical integration) could substantially improve this situation.
For this purpose we have initiated a community platform (www.seismo-live.org) where Python-based Jupyter notebooks can be accessed and run without any necessary downloads or local software installations. The increasingly popular Jupyter notebooks allow combining markup language, graphics, equations with interactive, executable Python code. We demonstrate the potential with training notebooks for the finite-difference method, pseudospectral methods, finite/spectral element methods, and the finite-volume and discontinuous Galerkin methods. The platform already includes general Python training, introduction to the ObsPy library for seismology as well as seismic data processing and noise analysis. Submission of Jupyter notebooks for general seismology is encouraged. The platform can be used for complementary teaching in Earth Science courses on compute-intensive research areas. 18. Seismologically determined bedload flux during the typhoon season. Science.gov (United States) Chao, Wei-An; Wu, Yih-Min; Zhao, Li; Tsai, Victor C; Chen, Chi-Hsuan 2015-02-05 Continuous seismic records near river channels can be used to quantify the energy induced by river sediment transport. During the 2011 typhoon season, we deployed a seismic array along the Chishan River in the mountain area of southern Taiwan, where there is strong variability in water discharge and high sedimentation rates. We observe hysteresis in the high-frequency (5-15 Hz) seismic noise level relative to the associated hydrological parameters. In addition, our seismic noise analysis reveals an asymmetry and a high coherence in noise cross-correlation functions for several station pairs during the typhoon passage, which corresponds to sediment particles and turbulent flows impacting along the riverbed where the river bends sharply. Based on spectral characteristics of the seismic records, we also detected 20 landslide/debris flow events, which we use to estimate the sediment supply.
Comparison of sediment flux between seismologically determined bedload and derived suspended load indicates temporal changes in the sediment flux ratio, which imply a complex transition process from the bedload regime to the suspension regime between typhoon passage and off-typhoon periods. Our study demonstrates the possibility of seismologically monitoring river bedload transport, thus providing valuable additional information for studying fluvial bedrock erosion and mountain landscape evolution. 19. 10 CFR 72.102 - Geological and seismological characteristics for applications before October 16, 2003 and... Science.gov (United States) 2010-01-01 ... 10 Energy 2 2010-01-01 2010-01-01 false Geological and seismological characteristics for... WASTE Siting Evaluation Factors § 72.102 Geological and seismological characteristics for applications..., sites will be acceptable if the results from onsite foundation and geological investigation, literature... 20. New developments in high resolution borehole seismology and their applications to reservoir development and management Energy Technology Data Exchange (ETDEWEB) Paulsson, B.N.P. [Chevron Petroleum Technology Company, La Habra, CA (United States) 1997-08-01 Single-well seismology, Reverse Vertical Seismic Profiles (VSPs) and Crosswell seismology are three new seismic techniques that we jointly refer to as borehole seismology. Borehole seismic techniques are of great interest because they can obtain much higher resolution images of oil and gas reservoirs than what is obtainable with currently used seismic techniques. The quality of oil and gas reservoir management decisions depend on the knowledge of both the large and the fine scale features in the reservoirs. Borehole seismology is capable of mapping reservoirs with an order of magnitude improvement in resolution compared with currently used technology. 
In borehole seismology we use a high frequency seismic source in an oil or gas well and record the signal in the same well, in other wells, or on the surface of the earth. 1. DESTRUCTION OF THE LITHOSPHERE: FAULT-BLOCK DIVISIBILITY AND ITS TECTONOPHYSICAL REGULARITIES Directory of Open Access Journals (Sweden) Semen I. Sherman 2012-01-01 Full Text Available A new concept is proposed concerning the origin and inception of 'initial' faults and formation of large blocks as a result of cooling of the Archaean lithosphere, during which Bénard cells had formed (Fig. 5). At locations where cooling convection currents went down, partial crystallization took place, stresses were localized, and initial faults occurred there. The systems of such faults developed mainly in two directions and gradually formed an initial block pattern of the lithosphere. This pattern is now represented by the largest Archaean faults acting as boundaries of the lithospheric plates and large intraplate blocks (Fig. 6). This group of faults represents the first scale-time level of destruction of the lithosphere. Large blocks of the first (and maybe the second) order, which are located on the viscous foundation, interacted with each other under the influence of the sublithospheric movements or endogenous sources and thus facilitated the occurrence of high stresses inside the blocks. When the limits of strength characteristics of the block medium were exceeded, the intrablock stresses were released and caused formation of fractures/faults and blocks of various ranks (Fig. 14). This large group, including fault-block structures of various ranks and ages, comprises the second level of the scale-time destruction of the lithosphere. The intense evolution of ensembles of faults and blocks of the second scale-time level is facilitated by short-term activation of fault-block structures of the lithosphere under the influence of strain waves.
Periods of intensive short-term activation are reliably detected by seismic monitoring over the past fifty years. Investigations of periodical processes specified in the geological records over the post-Proterozoic periods [Khain, Khalilov, 2009] suggest that in thus far uninvestigated historical and more ancient times, the top of the lithosphere was subject to wave processes that 2. Evidence for multiphase folding of the central Indian Ocean lithosphere Digital Repository Service at National Institute of Oceanography (India) Krishna, K.S.; Bull, J.M.; Scrutton, R.A. Long-wavelength (100-300 km) folding in the central Indian Ocean associated with the diffuse plate boundary separating the Indian, Australian, and Capricorn plates is Earth's most convincing example of organized large-scale lithospheric deformation... 3. Lithospheric Strength Beneath the Zagros Mountains of Southwestern Iran Science.gov (United States) 2006-05-01 The Zagros Mountain Belt of southwestern Iran is among the most seismically active mountain belts in the world. Early seismic studies of this area found that the lithosphere underlying the Zagros Mountains follows the "jelly sandwich" model, having a strong upper crust and a strong lithospheric mantle, separated by a weak lower crust. More recent studies, which analyzed earthquakes originating within the Zagros Mountains that were recorded at teleseismic distances, however, found that these earthquakes occurred only within the upper crust, thus indicating that the strength of the Zagros Mountains' lithosphere lies only within the upper crust, in accordance with the "creme brulee" lithospheric model. Preliminary analysis of regionally recorded earthquakes that originated within the Zagros Mountains is presented here. Using earthquakes recorded at regional distances will allow the analysis of a larger dataset than has been used in previous studies.
Preliminary results show earthquakes occurring throughout the crust and possibly extending into the upper mantle. 4. Global strength and elastic thickness of the lithosphere NARCIS (Netherlands) Tesauro, M.; Kaban, M.K.; Cloetingh, S.A.P.L. 2012-01-01 The strength and effective elastic thickness (Te) of the lithosphere control its response to tectonic and surface processes. Here, we present the first global strength and effective elastic thickness maps, which are determined using physical properties from recent crustal and lithospheric models. Pronounced 5. Effects and Non-effects of Stream Drying on Stonefly (Plecoptera) Assemblages in Two Ouachita Mountains, AR, Catchments Science.gov (United States) Sheldon, A. L.; Warren, M. L. 2005-05-01 Streams integrate landscape change. To establish baseline conditions and predictive relationships in two experimental catchments, we collected adult stoneflies at 38 sites for a year. We used a stratified random sampling design and regular collections of adults, which are identifiable to species level, to ensure thorough coverage. We collected 43 species (1-27 per site). We characterized sites by two descriptors: stream size as drainage AREA, and DRY, a time-weighted average of absence of surface water in measured sections. Sites ranged from continuous surface flow to partial or total drying for months. Species composition (NMS ordination) was influenced strongly by DRY. Richness of species and genera was well described (R2 > 85%) by multiple regressions on AREA and DRY. However, species richness was related strongly to AREA but not to DRY (P > 0.45). Generic richness, in contrast, was related significantly (P < 0.001) to both descriptors but the negative effect of DRY was stronger. Seasonal drying is common in the Ouachita region and part of the fauna is resistant to drying. Our results have implications for diversity-stress relationships and taxonomic resolution in community ecology and monitoring. 6.
Using a Web Site to Support a Seismology Course Textbook Science.gov (United States) Wysession, M. E.; Stein, S. 2004-12-01 We present a course in seismology that consists of a textbook with an accompanying web site (http://epscx.wustl.edu/seismology/book). The web site serves many different functions, and is of great importance as a companion to the curriculum in several different ways: (1) All of the more than 600 figures from the book are available on the web site. Geophysics is a very visually-oriented discipline, and many concepts are more easily taught with appropriate visual tools. In addition, many instructors are now using computer-based lecture programs such as PowerPoint. To aid in this, all of the figures are displayed in a common JPG format, both with and without titles. They are available to be used in a seismology course, or any kind of Earth Science course. This way, an instructor can easily grab a figure from the web site and drop it into a PowerPoint format. The figures are listed by number, but are also obtainable from menus of thumbnail sketches. If an instructor would like all of the figures, they can be obtained as large zip files, which can be unzipped after downloading. In addition, sample PowerPoint lectures using the figures as well as the equations from the text will be available on the course web site. (2) Solutions to all of the homework problems are available in PDF format on the course website. Homework is a vital component of any quantitative course, but it is often a significant time commitment for instructors to derive all of the homework problems. In addition, it is much easier to select which homework problems to assign if the solutions can be seen. The 64 pages of homework solutions are on a secure web site that requires a user ID and password that can be obtained from the authors. (3) Any errors found in the textbook are immediately posted on an "Errata" web page.
Many of these errors are found by instructors who are using the curriculum (and they are given credit for finding the errors!). The text becomes an interactive process 7. ObsPy - A Python Toolbox for Seismology - and Applications Science.gov (United States) Krischer, L.; Megies, T.; Barsch, R.; MacCarthy, J.; Lecocq, T.; Koymans, M. R.; Carothers, L.; Eulenfeld, T.; Reyes, C. G.; Falco, N.; Sales de Andrade, E. 2017-12-01 8. Lithospheric structure and deformation of the North American continent OpenAIRE Magdala Tesauro; Mikhail Kaban; S. Cloetingh; W. D. Mooney 2013-01-01 We estimate the integrated strength and elastic thickness (Te) of the North American lithosphere based on thermal, density and structural (seismic) models of the crust and upper mantle. The temperature distribution in the lithosphere is estimated considering for the first time the effect of composition as a result of the integrative approach based on a joint analysis of seismic and gravity data. We do this via an iterative adjustment of the model. The upper mantle temperatures are initially e... 9. Solving some problems of engineering seismology by structural method International Nuclear Information System (INIS) Ishtev, K.G.; Hadjikov, L.M.; Dineva, P.S.; Jordanov, P.P. 1983-01-01 The work suggests a method for solving the direct and inverse problems of engineering seismology by means of the structural approach of systems theory. This approach makes it possible to account simultaneously for the two basic types of damping of the seismic signals in the earth foundation: geometrical damping and a damping in consequence of a dissipative energy loss. The structural scheme automatically accounts for the geometric damping of the signals. The damping from dissipative energy loss, on the other hand, is accounted for through a choice of the type of frequency characteristics or the transmission functions of the different layers.
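As a purely illustrative sketch of the idea of cascading per-layer frequency characteristics (the functional form and every number below are assumptions, not the authors' model), the overall response of a layered earth foundation can be written as a product of layer transfer functions:

```python
import numpy as np

# Hypothetical per-layer transmission functions combined into an overall
# frequency characteristic of a layered earth foundation (illustrative only).
f = np.linspace(0.1, 20.0, 200)   # frequency axis in Hz (assumed range)

def layer_response(freq, thickness, alpha):
    """Assumed dissipative transfer function of a single layer."""
    return np.exp(-alpha * thickness * freq)

# (thickness in m, damping coefficient) for three made-up layers
layers = [(10.0, 0.002), (25.0, 0.001), (40.0, 0.0005)]

H = np.ones_like(f)
for thickness, alpha in layers:
    H *= layer_response(f, thickness, alpha)   # cascade the layers
```

Because each assumed layer only attenuates, the combined response decreases monotonically with frequency, mimicking a dissipative energy loss that grows with frequency.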
With a few examples the advantages of the model including the two types of attenuation of the seismic signal are illustrated. An integral coefficient of damping is calculated which, analogously to the frequency functions, represents a generalized characteristic of the whole earth foundation. (orig./HP) 10. Can mobile phones be used in strong motion seismology? Science.gov (United States) D'Alessandro, Antonino; D'Anna, Giuseppe 2013-04-01 Micro Electro-Mechanical Systems (MEMS) accelerometers are electromechanical devices able to measure static or dynamic accelerations. In the 1990s MEMS accelerometers revolutionized the automotive-airbag system industry and are currently widely used in laptops, game controllers and mobile phones. Nowadays MEMS accelerometers seem to provide adequate sensitivity, noise level and dynamic range to be applicable to earthquake strong motion acquisition. The current use of 3-axis MEMS accelerometers in mobile phones may provide a new means to easily increase the number of observations when a strong earthquake occurs. However, before utilizing the signals recorded by a mobile phone equipped with a 3-axis MEMS accelerometer for any scientific purpose, it is fundamental to verify that the signals collected provide reliable records of ground motion. For this reason we have investigated the suitability of the iPhone 5 mobile phone (one of the most popular mobile phones in the world) for strong motion acquisition. It is provided with several MEMS devices, including a three-axis gyroscope, a three-axis electronic compass and the LIS331DLH three-axis accelerometer. The LIS331DLH sensor is a low-cost, high-performance three-axis linear accelerometer, with 16-bit digital output, produced by STMicroelectronics Inc. We have tested the LIS331DLH MEMS accelerometer using a vibrating table and the EpiSensor FBA ES-T as the reference sensor. In our experiments the reference sensor was rigidly co-mounted with the LIS331DLH MEMS sensor on the vibrating table.
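The damped sine test waveforms described in this abstract can be sketched as follows (the sampling rate, duration, and decay constant are illustrative assumptions, not values from the experiment):

```python
import numpy as np

# Sketch of damped sine test waveforms like those used on the vibrating table.
# Sampling rate, duration and decay constant are assumptions.
fs = 200.0                          # sampling rate in Hz (hypothetical)
t = np.arange(0.0, 5.0, 1.0 / fs)   # 5 s time axis per test waveform

def damped_sine(f0, amp, decay=0.5):
    """Damped sine wave at central frequency f0 (Hz) with initial amplitude amp."""
    return amp * np.exp(-decay * t) * np.sin(2.0 * np.pi * f0 * t)

# central frequencies from 0.2 Hz to 20 Hz in steps of 0.2 Hz
frequencies = np.linspace(0.2, 20.0, 100)
signals = [damped_sine(f0, amp=1.0) for f0 in frequencies]

print(len(signals))  # one test waveform per central frequency
```

Sweeping the central frequency in fine steps, as the abstract describes, yields one waveform per frequency against which the device under test can be compared to the reference sensor.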
We assessed the MEMS accelerometer in the frequency range 0.2-20 Hz, the typical range of interest in strong motion seismology and earthquake engineering. We generated both constant and damped sine waves with central frequencies from 0.2 Hz to 20 Hz in steps of 0.2 Hz. For each frequency analyzed we generated sine waves with mean amplitude 50, 100, 200, 400, 800 and 1600 mg0. For damped sine waves we generate waveforms with initial amplitude 11. Seismic Constraints on the Lithosphere-Asthenosphere Boundary Beneath the Izu-Bonin Area: Implications for the Oceanic Lithospheric Thinning Science.gov (United States) Cui, Qinghui; Wei, Rongqiang; Zhou, Yuanze; Gao, Yajian; Li, Wenlan 2018-01-01 The lithosphere-asthenosphere boundary (LAB) is the seismic discontinuity with negative velocity contrasts in the upper mantle. Seismic detections of the LAB are of great significance in understanding plate tectonics, mantle convection and lithospheric evolution. In this paper, we study the LAB in the Izu-Bonin subduction zone using four deep earthquakes recorded by the permanent and temporary seismic networks of the USArray. The LAB is clearly revealed with sP precursors (sdP) through linear slant stacking. As illustrated by reflected points of the identified sdP phases, the depth of the LAB beneath the Izu-Bonin Arc (IBA) is about 65 km with a range of 60-68 km. The identified sdP phases with opposite polarities relative to sP phases have the average relative amplitude of 0.21, which means a 3.7% velocity drop and implies partial melting in the asthenosphere. On the basis of the crustal age data, the lithosphere beneath the IBA is located at the 1100 °C isotherm calculated with the GDH1 model. Compared to tectonically stable areas, such as the West Philippine Basin (WPB) and Parece Vela Basin (PVB) in the Philippine Sea, the lithosphere beneath the Izu-Bonin area shows obvious lithospheric thinning.
According to the geodynamic and petrological studies, the oceanic lithospheric thinning phenomenon can be attributed to strong erosion by small-scale convection in the mantle wedge enriched in volatiles and melts. 12. Global model for the lithospheric strength and effective elastic thickness Science.gov (United States) Tesauro, Magdala; Kaban, Mikhail K.; Cloetingh, Sierd A. P. L. 2013-08-01 Global distributions of the strength and effective elastic thickness (Te) of the lithosphere are estimated using physical parameters from recent crustal and lithospheric models. For the Te estimation we apply a new approach, which makes it possible to take into account variations of Young's modulus (E) within the lithosphere. In view of the large uncertainties affecting strength estimates, we evaluate global strength and Te distributions for possible end-member 'hard' (HRM) and 'soft' (SRM) rheology models of the continental crust. Temperature within the lithosphere has been estimated using a recent tomography model of Ritsema et al. (2011), which has much higher horizontal resolution than previous global models. Most of the strength is localized in the crust for the HRM and in the mantle for the SRM. These results contribute to the long-standing debate on the applicability of the "crème brulée" or "jelly-sandwich" model for the lithosphere structure. Changing from the SRM to the HRM turns most of the continental areas from the totally decoupled mode to the fully coupled mode of the lithospheric layers. However, in the areas characterized by a high thermal regime and thick crust, the layers remain decoupled even for the HRM. At the same time, for the inner part of the cratons the lithospheric layers are coupled in both models. Therefore, rheological variations lead to large changes in the integrated strength and Te distribution in the regions characterized by intermediate thermal conditions.
In these areas temperature uncertainties have a greater effect, since this parameter principally determines rheological behavior. Comparison of the Te estimates for both models with those determined from the flexural loading and spectral analysis shows that the 'hard' rheology is likely applicable for cratonic areas, whereas the 'soft' rheology is more representative for young orogens. 13. Moving towards persistent identification in the seismological community Science.gov (United States) Quinteros, Javier; Evans, Peter; Strollo, Angelo; Ulbricht, Damian; Elger, Kirsten; Bertelmann, Roland 2016-04-01 The GEOFON data centre and others in the seismological community have been archiving seismic waveforms for many years. The amount of seismic data available continuously increases due to the use of higher sampling rates and the growing number of stations. In recent years, there has been a trend towards standardization of the protocols and formats to improve and homogenise access to these data [FDSN, 2013]. The seismological community has begun assigning a particular persistent identifier (PID), the Digital Object Identifier (DOI), to seismic networks as a first step for properly and consistently attributing the use of data from seismic networks in scientific articles [Evans et al., 2015]. This was codified in a recommendation by the International Federation of Digital Seismograph Networks [FDSN, 2014]; DOIs for networks now appear in community web pages. However, our community, in common with other fields of science, still struggles with issues such as: supporting reproducibility of results; providing proper attribution (data citation) for data sets; and measuring the impact (by tracking their use) of those data sets.
Seismological data sets used for research are frequently created "on-the-fly" based on particular user requirements such as location or time period; users prepare requests to select subsets of the data held in seismic networks; the data actually provided may even be held at many different data centres [EIDA, 2016]. These subsets also require careful citation. For persistency, a request must receive exactly the same data when repeated at a later time. However, if data are curated between requests, the data set delivered may differ, severely complicating the ability to reproduce a result. Transmission problems or configuration problems may also inadvertently modify the response to a request. With this in mind, our next step is the assignment of additional EPIC-PIDs to daily data files (currently over 28 million in the GEOFON archive) for use within the data 14. Towards a single seismological service infrastructure in Europe Science.gov (United States) Spinuso, A.; Trani, L.; Frobert, L.; Van Eck, T. 2012-04-01 In the last five years, services and data providers within the seismological community in Europe have focused their efforts on migrating the way of opening their archives towards a Service Oriented Architecture (SOA). This process pragmatically follows technological trends and available solutions, aiming at effectively improving all the data stewardship activities.
These advancements are possible thanks to the cooperation and the follow-ups of several EC infrastructural projects that, by looking at general purpose techniques, combine their developments envisioning a multidisciplinary platform for earth observation as the final common objective (EPOS, the European Plate Observing System). One of the first results of this effort is the Earthquake Data Portal (http://www.seismicportal.eu), which provides a collection of tools to discover, visualize and access a variety of seismological data sets such as seismic waveforms, accelerometric data, earthquake catalogs and parameters. The Portal offers a cohesive distributed search environment, linking data search and access across multiple data providers through interactive web-services, map-based tools and diverse command-line clients. Our work continues under other EU FP7 projects. Here we will address initiatives in two of those projects. The NERA (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation) project will implement a Common Services Architecture based on OGC services APIs, in order to provide Resource-Oriented common interfaces across the data access and processing services. This will improve interoperability between tools and across projects, enabling the development of higher-level applications that can uniformly access the data and processing services of all participants. This effort will be conducted jointly with the VERCE project (Virtual Earthquake and Seismology Research Community for Europe). VERCE aims to enable seismologists to exploit the wealth of seismic data 15. Post-processing scheme for modelling the lithospheric magnetic field Directory of Open Access Journals (Sweden) V. Lesur 2013-03-01 We investigated how the noise in satellite magnetic data affects magnetic lithospheric field models derived from these data in the special case where this noise is correlated along satellite orbit tracks.
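Such along-track-correlated noise can be mimicked with a toy simulation: one fixed perturbation pattern along the orbit, rescaled for each orbit by an independent zero-mean normal factor. All array sizes and the stand-in perturbation shape below are assumptions for illustration only:

```python
import numpy as np

# Toy model of along-track correlated satellite noise: one fixed perturbation
# pattern per orbit, scaled by an independent N(0, 1) factor for each orbit.
rng = np.random.default_rng(0)
n_orbits, n_samples = 50, 100                       # hypothetical sizes

# stand-in for the perturbation magnetic field along one orbit track
perturbation = np.sin(np.linspace(0.0, 2.0 * np.pi, n_samples))

# one normally distributed, zero-mean scaling factor per orbit
scales = rng.normal(loc=0.0, scale=1.0, size=n_orbits)

# noise is fully correlated along each track, independent between tracks
noise = scales[:, None] * perturbation[None, :]

print(noise.shape)  # (orbits, samples along track)
```

Each row of the resulting array is perfectly correlated along its own track but statistically independent of every other row, which is the structure of the noise model described in the abstract.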
For this we describe the satellite data noise as a perturbation magnetic field scaled independently for each orbit, where the scaling factor is a random variable, normally distributed with zero mean. Under this assumption, we have been able to derive a model for errors in lithospheric models generated by the correlated satellite data noise. Unless the perturbation field is known, estimating the noise in the lithospheric field model is a non-linear inverse problem. We therefore proposed an iterative post-processing technique to estimate both the lithospheric field model and its associated noise model. The technique has been successfully applied to derive a lithospheric field model from CHAMP satellite data up to spherical harmonic degree 120. The model is in agreement with other existing models. The technique can, in principle, be extended to all sorts of potential field data with "along-track" correlated errors. 16. Creating a Facebook Page for the Seismological Society of America Science.gov (United States) Newman, S. B. 2009-12-01 17. Regional geology, tectonic, geomorphology and seismology studies of interest to nuclear power plants at Itaorna beach International Nuclear Information System (INIS) Hasui, Y.; Almeida, F.F.M. de; Mioto, J.A.; Melo, M.S. de. 1982-01-01 The study prepared for the nuclear power plants to be located at Itaorna comprised the analysis and integration of geologic, tectonic, geomorphologic and seismologic information, and satisfactory results regarding regional stability were obtained. (L.H.L.L.) [pt 18. Recent research in earth structure, earthquake and mine seismology, and seismic hazard evaluation in South Africa CSIR Research Space (South Africa) Wright, C 2003-07-01 Full Text Available of earthquakes, earthquake hazard and earth structure in South Africa was prepared for the centennial handbook of the International Association of Seismology and Physics of the Earth's Interior (IASPEI). References to theses completed in the last four... 19.
Mobile and modular. BGR develops seismological monitoring stations for universal applications International Nuclear Information System (INIS) Hinz, Erwin; Hanneken, Mark 2016-01-01 BGR seismologists often set up monitoring stations for testing purposes. The engineers from the Central Seismological Observatory have now developed a new type of mobile monitoring station which can be remotely controlled. 20. QuakeML: status of the XML-based seismological data exchange format OpenAIRE Joachim Saul; Philipp Kästli; Fabian Euchner; Danijel Schorlemmer 2011-01-01 QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as setting a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. The current release (version 1.2) is based on a public Request for Comments process and accounts for suggestions and comments... 1. Urban Seismology: on the origin of earth vibrations within a city OpenAIRE Díaz, Jordi; Ruiz, Mario; Sánchez-Pastor, Pilar S.; Romero, Paula 2017-01-01 Urban seismology has become an active research field in recent years, both with seismological objectives, such as obtaining better microzonation maps in highly populated areas, and with engineering objectives, such as the monitoring of traffic or the surveying of historical buildings. We analyze here the seismic records obtained by a broad-band seismic station installed in the ICTJA-CSIC institute, located near the center of Barcelona city. Although this station was installed to introdu...
1990-01-01 This report summarises the research and development activities of the Seismology Section during the period from January 1988 to December 1989. Apart from the ongoing work on forensic seismology, seismicity studies, rock burst monitoring, and elastic wave propagation, a new field system became operational at Bhatsa, located about 100 km from Bombay, comprising an 11-station radio-telemetered seismic network with a central recording laboratory to study reservoir-induced seismicity. (author). figs., tabs 3. State-of-the-art of the historical seismology in Colombia Directory of Open Access Journals (Sweden) 2004-06-01 Full Text Available A discrete number of historical seismology investigations are available in Colombia, dating back 50 years. This paper reviews basic information about earthquake studies in Colombia, such as primary sources, compilation of descriptive catalogues and parametric catalogues. Father Jesús Emilio Ramírez made the main systematic study before 1975. During the last 20 years, great earthquakes hit Colombia and, as a consequence, historical seismology investigation was developed within the framework of seismic hazard projects.
Consequently, a diverse group of Earth scientists encompassing geophysicists, geodynamicists, geochemists and petrologists contributed to this Volume, providing a comprehensive overview on the nature and evolution of lithospheric mantle by combining studies that exploit different types of data and interpretative approaches. The integration of geochemical and geodynamic datasets and their interpretation represents the state of the art in our knowledge of the lithosphere and beyond, and could serve as a blueprint for future strategies in concept and methodology to advance our knowledge of this and other terrestrial reservoirs. 5. Effects of magnitude, depth, and time on cellular seismology forecasts Science.gov (United States) Fisher, Steven Wolf This study finds that, in most cases analyzed to date, past seismicity tends to delineate zones where future earthquakes are likely to occur. Network seismicity catalogs for the New Madrid Seismic Zone (NMSZ), Australia (AUS), California (CA), and Alaska (AK) are analyzed using modified versions of the Cellular Seismology (CS) method of Kafka (2002, 2007). The percentage of later occurring earthquakes located near earlier occurring earthquakes typically exceeds the expected percentage for randomly distributed later occurring earthquakes, and the specific percentage is influenced by several variables, including magnitude, depth, time, and tectonic setting. At 33% map area coverage, hit percents are typically 85-95% in the NMSZ, 50-60% in AUS, 75-85% in CA, and 75-85% in AK. Statistical significance testing is performed on trials analyzing the same variables so that the overall regions can be compared, although some tests are inconclusive due to the small number of earthquake sample sizes. These results offer useful insights into understanding the capabilities and limits of CS studies, which can provide guidance for improving the seismicity-based components of seismic hazard assessments. 6. 
Ambient seismic noise tomography for exploration seismology at Valhall Science.gov (United States) de Ridder, S. A. 2011-12-01 Permanent ocean-bottom cables installed at the Valhall field can repeatedly record high-quality active seismic surveys. But in the absence of active seismic shooting, passive data can be recorded and streamed to the platform in real time. Here I studied 29 hours of data using seismic interferometry. I generated omni-directional Scholte-wave virtual sources at frequencies considered very low in the exploration seismology community (0.4-1.75 Hz). Scholte-wave group arrival times are inverted using both eikonal tomography and straight-ray tomography. The top 100 m of the near-surface at Valhall contains buried channels about 100 m wide that have been imaged with active seismic data. Images obtained by ambient seismic noise tomography (ASNT) using eikonal tomography or straight-ray tomography both contain anomalies that match these channels. When continuous recordings are made in real time, tomography images of the shallow subsurface can be formed or updated on a daily basis, forming a very low cost near-surface monitoring system using seismic noise. 7. Jovian seismology: preliminary results of the SYMPA instrument Science.gov (United States) Gaulme, P.; Schmider, F. X.; Gay, J.; Jacob, C.; Jeanneaux, F.; Alvarez, M.; Reyes, M.; Valtier, J. C.; Fossat, E.; Palle, P. L.; Belmonte, J. C.; Gelly, B. 2006-06-01 Jupiter's internal structure is poorly known (Guillot et al. 2004). Seismology is a powerful tool to investigate the internal structure of planets and stars by analyzing how acoustic waves propagate. Mosser (1997) and Gudkova & Zarkhov (1999) showed that the detection and identification of non-radial modes up to degree ℓ=25 can strongly constrain the internal structure. SYMPA is a ground-based network project dedicated to the Jovian oscillations (Schmider et al. 2002). The instrument is composed of a Mach-Zehnder interferometer producing four interferograms of the planetary spectrum.
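Recovering a phase from four quadrature-shifted intensity measurements, as in the interferometric setup described above, is typically done with the standard four-step phase-shifting formula. The sketch below is a generic illustration of that step (not SYMPA's actual pipeline; all numbers are hypothetical):

```python
import numpy as np

# Four-step phase-shifting: with intensities I_k = A + B*cos(phi + k*pi/2),
# k = 0..3, the unknown phase is phi = atan2(I3 - I1, I0 - I2).
def phase_from_quadrature(i0, i1, i2, i3):
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: build four quadrature-shifted frames from a known phase
# (A, B and phi_true are arbitrary test values) and recover it.
phi_true = 0.7                  # radians
A, B = 2.0, 1.0                 # background level and fringe amplitude
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = phase_from_quadrature(*frames)  # recovers phi_true
```

In an instrument like SYMPA the recovered phase is then related to a Doppler shift of the planetary spectrum; the formula above covers only the generic quadrature-combination step.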
The combination of the four images in phase quadrature allows the reconstruction of the incident light phase, which is related to the Doppler shift generated by the oscillations. Two SYMPA instruments were built at the University of Nice and were used simultaneously during two observation campaigns, in 2004 and 2005, at the San Pedro Martir observatory (Mexico) and the Teide observatory (Canary Islands). We will present for the first time the data processing and the preliminary results of the experiment. CERN Document Server Slawinski, Michael A 2016-01-01 The author dedicates this book to readers who are concerned with finding out the status of concepts, statements and hypotheses, and with clarifying and rearranging them in a logical order. It is thus not intended to teach tools and techniques of the trade, but to discuss the foundations on which seismology — and in a larger sense, the theory of wave propagation in solids — is built. A key question is: why and to what degree can a theory developed for an elastic continuum be used to investigate the propagation of waves in the Earth, which is neither a continuum nor fully elastic? But the scrutiny of the foundations goes much deeper: material symmetry, effective tensors, equivalent media; the influence (or, rather, the lack thereof) of gravitational and thermal effects and the rotation of the Earth, are discussed ab initio. The variational principles of Fermat and Hamilton and their consequences for the propagation of elastic waves, causality, Noether's theorem and its consequences on conservation of energy... 9. Extensional and compressional instabilities in icy satellite lithospheres International Nuclear Information System (INIS) Herrick, D.L.; Stevenson, D.J.
1990-01-01 The plausibility of invoking a lithospheric instability mechanism to account for the grooved terrains on Ganymede, Enceladus, and Miranda is evaluated in light of the combination of a simple mechanical model of planetary lithospheres and asthenospheres with recent experimental data for the brittle and ductile deformation of ice. For Ganymede, high surface gravity and warm temperatures render the achievement of an instability great enough for the observed topographic relief virtually impossible; an instability of sufficient strength, however, may be able to develop on smaller, colder bodies such as Enceladus and Miranda. 15 refs 10. Integrating EarthScope Data to Constrain the Long-Term Effects of Tectonism on Continental Lithosphere Science.gov (United States) Porter, R. C.; van der Lee, S. 2017-12-01 One of the most significant products of the EarthScope experiment has been the development of new seismic tomography models that take advantage of the consistent station design, regular 70-km station spacing, and wide aperture of the EarthScope Transportable Array (TA) network. These models have led to the discovery and interpretation of additional compositional, thermal, and density anomalies throughout the continental US, especially within tectonically stable regions. The goal of this work is to use data from the EarthScope experiment to better elucidate the temporal relationship between tectonic activity and seismic velocities. To accomplish this, we compile several upper-mantle seismic velocity models from the Incorporated Research Institutions for Seismology (IRIS) Earth Model Collaboration (EMC) and compare these to a tectonic age model we compiled using geochemical ages from the Interdisciplinary Earth Data Alliance: EarthChem Database. Results from this work confirm quantitatively that the time elapsed since the most recent tectonic event is a dominant influence on seismic velocities within the upper mantle across North America.
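The age-velocity relationship summarized above can be illustrated with a toy calculation; upper-mantle shear velocities in stable regions tend to increase with time since the last tectonic event. All numbers below are hypothetical, not data from the cited study:

```python
import numpy as np

# Hypothetical station-averaged values: time since last tectonic event (Ma)
# and upper-mantle shear-wave velocity (km/s). Illustrative only.
age_ma = np.array([50.0, 200.0, 600.0, 1100.0, 1800.0, 2700.0])
vs_kms = np.array([4.35, 4.45, 4.55, 4.62, 4.68, 4.72])

# Velocity grows roughly linearly with the logarithm of tectonic age,
# so correlate Vs against log10(age); r is strongly positive here.
r = np.corrcoef(np.log10(age_ma), vs_kms)[0, 1]
```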
To further understand this relationship, we apply mineral-physics models for peridotite to estimate upper-mantle temperatures for the continental US from tomographically imaged shear velocities. This work shows that the relationship between the estimated temperatures and the time elapsed since the most recent tectonic event is broadly consistent with plate cooling models, yet shows intriguing scatter. Ultimately, this work constrains the long-term thermal evolution of continental mantle lithosphere. 11. Deformation of the Pannonian lithosphere and related tectonic topography: a depth-to-surface analysis NARCIS (Netherlands) Dombrádi, E. 2012-01-01 Fingerprints of deep-seated, lithospheric deformation are often recognised on the surface, contributing to topographic evolution, drainage organisation and mass transport. Interactions between deep and surface processes were investigated in the Carpathian-Pannonian region. The lithosphere beneath 12. STSHV a teleinformatic system for historic seismology in Venezuela Science.gov (United States) Choy, J. E.; Palme, C.; Altez, R.; Aranguren, R.; Guada, C.; Silva, J. 2013-05-01 From 1997 on, when the first "Jornadas Venezolanas de Sismicidad Historica" took place, great interest arose in Venezuela in organizing the available information related to historic earthquakes. At that time only one published historical earthquake catalogue existed, that of Centeno Grau, first published in 1949. That catalogue contained no references to its sources of information. Other catalogues existed, but they were internal reports for the petroleum companies and therefore difficult to access. In 2000 Grases et al. re-edited the Centeno-Grau catalogue, producing a new, very complete catalogue with all sources well referenced and updated. The next step in organizing historical seismicity data was, from 2004 to 2008, the creation of the STSHV (Sistema de teleinformacion de Sismologia Historica Venezolana, http://sismicidad.hacer.ula.ve ).
The idea was to bring together all information about destructive historic earthquakes in Venezuela in one place on the internet so it could be accessed easily by a widespread public. There are two ways to access the system: the first one, selecting an earthquake or a list of earthquakes, and the second one, selecting an information source or a list of sources. For each earthquake there is a summary of general information and additional materials: a list with the source parameters published by different authors, a list with intensities assessed by different authors, a list of information sources, a short text summarizing the historic situation at the time of the earthquake, and a list of pictures if available. There are search facilities for the seismic events, and dynamic maps can be created. The information sources are classified as: books, handwritten documents, transcriptions of handwritten documents, documents published in books, journals and congress proceedings, newspapers, seismological catalogues and electronic sources. There are facilities to find specific documents or lists of documents with common characteristics 13. Regional dependence in earthquake early warning and real time seismology International Nuclear Information System (INIS) Caprio, M. 2013-01-01 An effective earthquake prediction method is still a chimera. What we can do at the moment, after the occurrence of a seismic event, is to provide the maximum available information as soon as possible. This can help reduce the impact of the quake on the population and better organize rescue operations after the event. This study strives to improve the evaluation of earthquake parameters shortly after the occurrence of a major earthquake, and the characterization of regional dependencies in Real-Time Seismology.
The recent earthquake experience from Tohoku (M 9.0, 11.03.2011) showed how an efficient EEW system can inform numerous people and thus potentially reduce economic and human losses by distributing warning messages several seconds before the arrival of seismic waves. In the case of devastating earthquakes, the common communication channels can usually be overloaded or broken in the first minutes to days after the main shock. In such cases, a precise knowledge of the macroseismic intensity distribution will represent a decisive contribution to relief management and to the assessment of losses. In this work, I focused on improving the adaptability of EEW systems (chapters 1 and 2) and on deriving a global relationship for converting peak ground motion into macroseismic intensity and vice versa (chapter 3). For EEW applications, in chapter 1 we present an evolutionary approach to magnitude estimation for earthquake early warning based on real-time inversion of displacement spectra. The Spectrum Inversion (SI) method estimates magnitude and its uncertainty by inferring the shape of the entire displacement spectral curve based on the part of the spectrum constrained by available data. Our method can be applied in any region without the need for calibration. SI magnitude and uncertainty estimates are updated each second following the initial P detection and potentially stabilize within 10 seconds from the initial earthquake detection 14. Regional dependence in earthquake early warning and real time seismology Energy Technology Data Exchange (ETDEWEB) Caprio, M. 2013-07-01 An effective earthquake prediction method is still a chimera. What we can do at the moment, after the occurrence of a seismic event, is to provide the maximum available information as soon as possible. This can help reduce the impact of the quake on the population and better organize rescue operations after the event.
This study strives to improve the evaluation of earthquake parameters shortly after the occurrence of a major earthquake, and the characterization of regional dependencies in Real-Time Seismology. The recent earthquake experience from Tohoku (M 9.0, 11.03.2011) showed how an efficient EEW system can inform numerous people and thus potentially reduce economic and human losses by distributing warning messages several seconds before the arrival of seismic waves. In the case of devastating earthquakes, the common communication channels can usually be overloaded or broken in the first minutes to days after the main shock. In such cases, a precise knowledge of the macroseismic intensity distribution will represent a decisive contribution to relief management and to the assessment of losses. In this work, I focused on improving the adaptability of EEW systems (chapters 1 and 2) and on deriving a global relationship for converting peak ground motion into macroseismic intensity and vice versa (chapter 3). For EEW applications, in chapter 1 we present an evolutionary approach to magnitude estimation for earthquake early warning based on real-time inversion of displacement spectra. The Spectrum Inversion (SI) method estimates magnitude and its uncertainty by inferring the shape of the entire displacement spectral curve based on the part of the spectrum constrained by available data. Our method can be applied in any region without the need for calibration. SI magnitude and uncertainty estimates are updated each second following the initial P detection and potentially stabilize within 10 seconds from the initial earthquake detection 15. A uniform seismological bulletin for the European-Mediterranean region International Nuclear Information System (INIS) Bossu, R.; Piedfroid, O.; Riviere, F. 2002-01-01 The goal of this EU-funded project is to develop means and tools to produce a homogeneous European-Mediterranean seismic bulletin that could serve as a reference.
The three main objectives are: 1) the definition of a unified magnitude scale for M > 3; 2) improved location of events, especially in border regions; and 3) improved rapid and regular data exchange within the European-Mediterranean region. The first step is to define a homogeneous and accurate magnitude estimation for the whole region of interest. Experience shows that the magnitudes reported by several institutes for a given event may differ by up to 1.5 units. Three different magnitude computations are applied to a reference data set of well-known events: an Lg-wave coda magnitude, a Richter local magnitude and a moment magnitude scale. The comparison of the results is currently being carried out. The algorithm associated with the selected magnitude will be implemented locally on a set of stations. New velocity models for border regions are developed from the analysis of the residuals of events recorded by permanent and temporary networks. The robustness and reliability of the 3D models versus the 1D model have been evaluated. EMSC gathers, via e-mail, manually picked seismic phase arrival times with or without associated locations from about 50 seismological institutes of the European-Mediterranean region in a database. These bulletins are merged automatically by dedicated software. The number of processed events is about 2000 per month and should grow significantly with larger input from the Middle East and Northern Africa. Events are then submitted to an automatic analysis of location reliability and, for dubious events, to manual reprocessing. In order to improve data exchange, the installation of autoDRM systems is promoted. (authors) 16. Monitoring the englacial fracture state using virtual-reflector seismology Science.gov (United States) Lindner, F.; Weemstra, C.; Walter, F.; Hadziioannou, C. 2017-12-01 Fracturing and changes in the englacial macroscopic water content change the elastic bulk properties of ice bodies.
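In studies like the one above, virtual-source responses are retrieved by seismic interferometry, which in its simplest form is a cross-correlation of recordings at two receivers; the lag of the correlation peak estimates the inter-receiver travel time. A generic synthetic sketch (not the processing of any cited study; all values illustrative):

```python
import numpy as np

# Simulate one noise wavefront seen at two receivers: receiver B records
# the same signal `lag` samples later than receiver A (circular shift).
rng = np.random.default_rng(0)
n = 4096
noise = rng.standard_normal(n)
lag = 25
rec_a = noise
rec_b = np.roll(noise, lag)

# Circular cross-correlation via FFT: c[k] = sum_n b[n] * a[n - k].
# The peak sits at k = lag, the inter-receiver travel time in samples.
xcorr = np.fft.ifft(np.fft.fft(rec_b) * np.conj(np.fft.fft(rec_a))).real
est_lag = int(np.argmax(xcorr))
```

Real ambient-noise processing stacks many such correlations over long time windows and both lag signs; this sketch only shows the core correlation step.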
Small seismic velocity variations, resulting from such changes, can be measured using a technique called coda-wave interferometry. Here, coda refers to the later-arriving, multiply scattered waves. Often, this technique is applied to so-called virtual-source responses, which can be obtained using seismic interferometry (a simple crosscorrelation process). Compared to other media (e.g., the Earth's crust), however, ice bodies exhibit relatively little scattering. This complicates the application of coda-wave interferometry to the retrieved virtual-source responses. In this work, we therefore investigate the applicability of coda-wave interferometry to virtual-source responses obtained using two alternative seismic interferometric techniques, namely, seismic interferometry by multidimensional deconvolution (SI by MDD), and virtual-reflector seismology (VRS). To that end, we use synthetic data, as well as active-source glacier data acquired on Glacier de la Plaine Morte, Switzerland. Both SI by MDD and VRS allow the retrieval of more accurate virtual-source responses. In particular, the dependence of the retrieved virtual-source responses on the illumination pattern is reduced. We find that this results in more accurate glacial phase-velocity estimates. In addition, VRS introduces virtual reflections from a receiver contour (partly) enclosing the medium of interest. By acting as a sort of virtual reverberation, the coda resulting from the application of VRS significantly increases seismic monitoring capabilities, in particular in cases where natural scattering coda is not available. 17. Accuracy assessment of high-rate GPS measurements for seismology Science.gov (United States) Elosegui, P.; Davis, J. L.; Ekström, G. 
2007-12-01 Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology. 18. Extension of thickened and hot lithospheres: Inferences from laboratory modeling NARCIS (Netherlands) Tirel, C.; Brun, J.P.; Sokoutis, D. 2006-01-01 The extension of a previously thickened lithosphere is studied through a series of analogue experiments.
The models deformed in free and boundary-controlled gravity spreading conditions that simulate the development of wide rift-type and core complex-type structures. In models, the development of 19. European Lithospheric Mantle; geochemical, petrological and geophysical processes Science.gov (United States) Ntaflos, Th.; Puziewicz, J.; Downes, H.; Matusiak-Małek, M. 2017-04-01 The second European Mantle Workshop took place at the end of August 2015 in Wroclaw, Poland, attended by leading scientists in the study of the lithospheric mantle from around the world. It built upon the results of the first European Mantle Workshop (held in 2007, in Ferrara, Italy) published in the Geological Society of London Special Publication 293 (Coltorti & Gregoire, 2008). 20. Lithospheric strength variations in Mainland China: Tectonic implications NARCIS (Netherlands) Deng, Yangfan; Tesauro, M. 2016-01-01 We present a new thermal and strength model for the lithosphere of Mainland China. To this purpose, we integrate a thermal model for the crust, using a 3-D steady-state heat conduction equation, with estimates for the upper-mantle thermal structure, obtained by inverting an S-wave tomography model. 1. Expanding Horizons in Mitigating Earthquake Related Disasters in Urban Areas: Global Development of Real-Time Seismology OpenAIRE Utkucu, Murat; Küyük, Hüseyin Serdar; Demir, İsmail Hakkı 2016-01-01 Abstract Real-time seismology is a newly developing alternative approach in seismology to mitigate earthquake hazard. It exploits up-to-date advances in seismic instrument technology, data acquisition, digital communications and computer systems for quickly transforming data into earthquake information in real time to reduce earthquake losses and their impact on social and economic life in earthquake-prone, densely populated urban and industrial areas. Real-time seismology systems are not o... 2.
Satellite gravity gradient views help reveal the Antarctic lithosphere Science.gov (United States) Ferraccioli, F.; Ebbing, J.; Pappa, F.; Kern, M.; Forsberg, R. 2017-12-01 Here we present and analyse satellite gravity gradient signatures derived from GOCE and superimpose these on tectonic and bedrock topography elements, as well as seismically derived estimates of crustal thickness for the Antarctic continent. The GIU satellite gravity component images the contrast between the thinner crust and lithosphere underlying the West Antarctic Rift System and the Weddell Sea Rift System and the thicker lithosphere of East Antarctica. The new images also suggest that more distributed wide-mode lithospheric and crustal extension affects both the Ross Sea Embayment and the less well known Ross Ice Shelf segment of the rift system. However, this pattern is less clear towards the Bellingshausen Embayment, indicating that the rift system narrows towards the southern edge of the Antarctic Peninsula. In East Antarctica, the satellite gravity data provide new views into the Archean to Mesoproterozoic Terre Adelie Craton, and clearly show the contrast with the crust and lithosphere underlying both the Wilkes Subglacial Basin to the east and the Sabrina Subglacial Basin to the west. This finding augments recent interpretations of aeromagnetic and airborne gravity data over the region, suggesting that the Mawson Continent is a composite lithospheric-scale entity, which was affected by several Paleoproterozoic and Mesoproterozoic orogenic events. Thick crust is imaged beneath the Transantarctic Mountains, the Terre Adelie Craton, the Gamburtsev Subglacial Mountains and also Eastern Dronning Maud Land, in particular beneath the recently proposed region of the Tonian Oceanic Arc Superterrane. The GIA and GIU components help delineate the edges of several of these lithospheric provinces.
One of the most prominent lithospheric-scale features discovered in East Antarctica from satellite gravity gradient imaging is the Trans East Antarctic Shear Zone that separates the Gamburtsev Province from the Eastern Dronning Maud Land Province and appears to form the 3. Numerical modeling of continental lithospheric weak zone over plume Science.gov (United States) Perepechko, Y. V.; Sorokin, K. E. 2011-12-01 The work is devoted to the development of magmatic systems in the continental lithosphere over diffluent mantle plumes. The areas of tension originating over them are accompanied by the appearance of fault zones and the formation of permeable channels through which magmatic melts are distributed. The numerical simulation of the dynamics of deformation fields in the lithosphere due to convection currents in the upper mantle, and of the formation of weakened zones that extend up to the upper crust and create the necessary conditions for the formation of intermediate magma chambers, has been carried out. A thermodynamically consistent non-isothermal model simulates the processes of heat and mass transfer of a wide class of magmatic systems, as well as the process of strain localization in the lithosphere and its influence on the formation of high-permeability zones in the lower crust. The substance of the lithosphere is a rheologically heterophase medium, which is described by two-velocity hydrodynamics. This makes it possible to take into account the penetration of the melt from the asthenosphere into the weakened zone. The energy dissipation occurs mainly due to interfacial friction and inelastic relaxation of shear stresses. The results of the calculations reveal a nonlinear process of formation of porous channels and demonstrate the diversity of emerging dissipative structures, which are determined by the properties of both the heterogeneous lithosphere and the overlying crust.
The mutual effects of a permeable channel and the corresponding melt filtration process on mantle convection and the dynamics of the asthenosphere have been studied. The formation of dissipative structures in the heterogeneous lithosphere above mantle plumes occurs in accordance with the following scenario: initially, the elastic behavior of the heterophase lithosphere leads to the formation of a narrow, though sufficiently extensive, weakened zone with higher porosity. Further, the increase in the width of 4. Lithospheric-scale centrifuge models of pull-apart basins Science.gov (United States) Corti, Giacomo; Dooley, Tim P. 2015-11-01 We present here the results of the first lithospheric-scale centrifuge models of pull-apart basins. The experiments simulate relative displacement of two lithospheric blocks along two offset master faults, with the presence of a weak zone in the offset area localising deformation during strike-slip displacement. Reproducing the entire lithosphere-asthenosphere system provides boundary conditions that are more realistic than the horizontal detachment in traditional 1 g experiments and thus provides a better approximation of the dynamic evolution of natural pull-apart basins. Model results show that local extension in the pull-apart basins is accommodated through the development of oblique-slip faulting at the basin margins and cross-basin faults obliquely cutting the rift depression. As observed in previous modelling studies, our centrifuge experiments suggest that the angle of offset between the master fault segments is one of the most important parameters controlling the architecture of pull-apart basins: the basins are lozenge shaped in the case of underlapping master faults, lazy-Z shaped in the case of neutral offset and rhomboidal shaped for overlapping master faults.
Model cross sections show significant along-strike variations in basin morphology, with a transition from narrow V- and U-shaped grabens to a more symmetric, boxlike geometry passing from the basin terminations to the basin centre; a flip in the dominance of the sidewall faults from one end of the basin to the other is observed in all models. These geometries are also typical of 1 g models and characterise several pull-apart basins worldwide. Our models show that the complex faulting in the upper brittle layer corresponds at depth to strong thinning of the ductile layer in the weak zone; a rise of the base of the lithosphere occurs beneath the basin, and maximum lithospheric thinning roughly corresponds to the areas of maximum surface subsidence (i.e., the basin depocentre). 5. Space geodesy validation of the global lithospheric flow Science.gov (United States) Crespi, M.; Cuffaro, M.; Doglioni, C.; Giannone, F.; Riguzzi, F. 2007-02-01 Space geodesy data are used to verify whether plates move chaotically or rather follow a sort of tectonic mainstream. While independent lines of geological evidence support the existence of a global ordered flow of plate motions that is westerly polarized, the Terrestrial Reference Frame (TRF) presents limitations in describing absolute plate motions relative to the mantle. For these reasons we jointly estimated a new plate motion model and three different solutions of net lithospheric rotation. Considering the six major plate boundaries and variable source depths of the main Pacific hotspots, we adapted the TRF plate kinematics by global space geodesy to absolute plate motion models with respect to the mantle. All three reconstructions confirm (i) the tectonic mainstream and (ii) the net rotation of the lithosphere. We still do not know the precise trend of this tectonic flow and the velocity of the differential rotation.
However, our results show that, assuming faster Pacific motions, as the asthenospheric source of the hotspots would allow, the best lithospheric net rotation estimate is 13.4 +/- 0.7 cm/yr. This superfast solution seems in contradiction with present knowledge of lithosphere decoupling, but it matches the geological constraints remarkably better than those retrieved with slower Pacific motion and net rotation estimates. Assuming faster Pacific motion, it is shown that all plates move orderly westward along the tectonic mainstream at different velocities, and the equator of the lithospheric net rotation lies inside the corresponding tectonic mainstream latitude band (~ +/-7°), defined by the 1σ confidence intervals. 6. Lithosphere mantle density of the North China Craton based on gravity data Science.gov (United States) Xia, B.; Artemieva, I. M.; Thybo, H. 2017-12-01 Based on gravity, seismic and thermal data, we constrained the lithospheric mantle density at in-situ and STP conditions. The gravity effects of topography, sedimentary cover, and variations in the Moho and the lithosphere-asthenosphere boundary were removed from the free-air gravity anomaly model. Sedimentary cover densities range from 1.80 g/cm3 for soft sediments to 2.40 g/cm3 for sandstone and limestone. The average crustal density is 2.70-2.78 g/cm3, corresponding to the thickness and density of the sedimentary cover. Based on the new thermal model, the surface heat flow in the original North China Craton, including the western block, is > 60 mW/m2. The Moho temperature ranges from 450-600 °C in the eastern block to 550-650 °C in the western block. The thermal lithosphere is 100-140 km thick where the surface heat flow is 60-70 mW/m2. The gravity effects of surface topography, sedimentary cover and Moho depth are 0 to +150 mGal, -20 to -120 mGal and +50 to -200 mGal, respectively.
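Gravity corrections of the order quoted above can be sanity-checked with the infinite-slab (Bouguer) approximation, dg = 2*pi*G*drho*h. A minimal sketch with illustrative numbers (not the authors' values):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1e-5     # 1 mGal = 1e-5 m/s^2

def slab_effect_mgal(delta_rho_kgm3, thickness_m):
    """Gravity effect of an infinite horizontal slab: dg = 2*pi*G*drho*h."""
    return 2.0 * math.pi * G * delta_rho_kgm3 * thickness_m / MGAL

# Hypothetical basin: 3 km of sediments about 400 kg/m^3 lighter than the
# basement gives roughly -50 mGal, within the sedimentary-cover range above.
dg = slab_effect_mgal(-400.0, 3000.0)
```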
Using the thermal lithosphere, the gravity effect of the lithosphere-asthenosphere boundary ranges from +20 mGal to +200 mGal, showing a strong correlation with the thickness of the lithosphere. The same relationship between the gravity effect of the lithosphere-asthenosphere boundary and the lithosphere thickness holds for the seismic lithosphere, with gravity effects of 0 to +220 mGal. The lithospheric mantle residual gravity, caused by lithospheric density variations, ranges from -200 to +50 mGal using the thermal lithosphere and from -250 to +100 mGal using the seismic lithosphere. For the thermal lithosphere, the lithospheric mantle density is 3.21-3.26 g/cm3 at in-situ conditions and 3.33-3.38 g/cm3 at STP conditions. Using the seismic lithosphere, the density ranges from 3.20-3.26 g/cm3 at in-situ conditions to 3.31-3.41 g/cm3 at STP conditions. The subcontinental lithosphere of the North China Craton is highly heterogeneous 7. Provenance for Runtime Workflow Steering and Validation in Computational Seismology Science.gov (United States) Spinuso, A.; Krischer, L.; Krause, A.; Filgueira, R.; Magnoni, F.; Muraleedharan, V.; David, M. 2014-12-01 Provenance systems may be offered by modern workflow engines to collect metadata about the data transformations at runtime. If combined with effective visualisation and monitoring interfaces, these provenance recordings can speed up the validation process of an experiment, suggesting interactive or automated interventions with immediate effects on the lifecycle of a workflow run. For instance, in the field of computational seismology, if we consider research applications performing long-lasting cross-correlation analysis and high-resolution simulations, the immediate notification of logical errors and the rapid access to intermediate results can trigger reactions that foster more efficient progress of the research.
These applications are often executed in secured and sophisticated HPC and HTC infrastructures, highlighting the need for a comprehensive framework that facilitates the extraction of fine-grained provenance and the development of provenance-aware components, leveraging the scalability characteristics of the adopted workflow engines, whose enactment can be mapped to different technologies (MPI, Storm clusters, etc.). This work looks at the adoption of W3C-PROV concepts and data model within a user-driven processing and validation framework for seismic data, supporting also computational and data management steering. Validation needs to balance automation with user intervention, considering the scientist as part of the archiving process. Therefore, the provenance data is enriched with community-specific metadata vocabularies and control messages, making an experiment reproducible and its description consistent with the community's understanding. Moreover, it can contain user-defined terms and annotations. The current implementation of the system is supported by the EU-funded VERCE project (http://verce.eu). It provides, as well as the provenance generation mechanisms, a prototype browser-based user interface and a web API built on top of a NoSQL storage 8. Big Data and High-Performance Computing in Global Seismology Science.gov (United States) Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen 2014-05-01 Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet.
This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer-duration (~180 min) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage resulting from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which mainly come from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data 9. Twitter Seismology: Earthquake Monitoring and Response in a Social World Science.gov (United States) Bowden, D. C.; Earle, P. S.; Guy, M.; Smoczyk, G. 2011-12-01 10. The electrical lithosphere in Archean cratons: examples from Southern Africa Science.gov (United States) Khoza, D. T.; Jones, A. G.; Muller, M. R.; Webb, S. J.
2011-12-01 The southern African tectonic fabric is made up of a number of Archean cratons flanked by Proterozoic and younger mobile belts, all with distinctly different but related geological evolutions. The cratonic margins and some intra-cratonic domain boundaries have played major roles in the tectonics of Africa by focusing ascending magmas and localising cycles of extension and rifting. Of these cratons, the southern extent of the Congo craton is one of the least-constrained tectonic boundaries in the African tectonic architecture, and knowledge of its geometry, and in particular of the LAB beneath it, is crucial for understanding the geological processes of formation and deformation prevailing in the Archean and later. In this work, which forms a component of the hugely successful Southern African MagnetoTelluric Experiment (SAMTEX), we present the lithospheric electrical resistivity image of the southern boundary of the enigmatic Congo craton and the Neoproterozoic Damara-Ghanzi-Chobe (DGC) orogenic belt on its flanks. Magnetotelluric data were collected along profiles crossing all three of these tectonic blocks. The two-dimensional resistivity models resulting from inverting the distortion-corrected responses along the profiles all indicate significant lateral variations in the crust and upper mantle structure along and across strike, from the younger DGC orogen to the older adjacent craton. There are significant lithospheric thickness variations between the terranes. The Moho depth in the DGC is mapped at 40 km by active seismic methods, and is also well constrained by S-wave receiver function models. The Damara belt lithosphere, although generally more conductive and significantly thinner (approximately 150 km) than the adjacent Congo and Kalahari cratons, exhibits upper crustal resistive features interpreted to be caused by igneous intrusions emplaced during the Gondwanan Pan-African magmatic event.
The thinned lithosphere is consistent with a 50 mW.m-2 steady-state conductive 11. Montessus de Ballore, a pioneer of seismology: The man and his work Science.gov (United States) Cisternas, Armando 2009-06-01 Ferdinand de Montessus de Ballore was one of the founders of scientific seismology. He was a pioneer in seismology at the same level as Perrey, Mallet, Milne and Omori. He became familiar with earthquakes and volcanoes in Central America (1881-1885). After his experience in El Salvador, his interest in understanding earthquakes and volcanoes shaped the rest of his life. Back in France he worked out a most complete world catalogue of earthquakes with 170,000 events (1885-1907), and completed his career as head of the Chilean Seismological Service (1907-1923). Many of his ideas anticipated later discoveries. He was an exceptional writer and published more than 30 books and hundreds of papers. 12. ObsPy: A Python toolbox for seismology - Sustainability, New Features, and Applications Science.gov (United States) Krischer, L.; Megies, T.; Sales de Andrade, E.; Barsch, R.; MacCarthy, J. 2016-12-01 ObsPy (https://www.obspy.org) is a community-driven, open-source project dedicated to offering a bridge for seismology into the scientific Python ecosystem. Amongst other things, it provides: read and write support, through a unified interface, for essentially every commonly used data format in seismology, covering waveform data as well as station and event meta information; a signal processing toolbox tuned to the specific needs of seismologists; integrated access to the largest data centers, web services, and databases; and wrappers around third-party codes such as libmseed and evalresp. Using ObsPy enables users to take advantage of the vast scientific ecosystem that has developed around Python. In contrast to many other programming languages and tools, Python is simple enough to enable the exploratory and interactive coding style desired by many scientists.
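The "signal processing toolbox tuned to the specific needs of seismologists" mentioned above covers routine steps such as demeaning, detrending and tapering. A plain-NumPy sketch of those steps follows; this is not the ObsPy API itself, which wraps such operations behind methods like `Trace.detrend` and `Trace.taper`:

```python
# Typical seismic preprocessing (demean, linear detrend, cosine taper),
# sketched with plain NumPy rather than ObsPy's own interface.
import numpy as np

def preprocess(trace, taper_fraction=0.05):
    """Return a demeaned, detrended, edge-tapered copy of a 1D waveform."""
    x = np.asarray(trace, dtype=float).copy()
    x -= x.mean()                           # remove the DC offset
    t = np.arange(x.size)
    slope, intercept = np.polyfit(t, x, 1)  # fit and remove a linear trend
    x -= slope * t + intercept
    n_tap = max(1, int(taper_fraction * x.size))
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_tap) / n_tap))  # cosine ramp
    x[:n_tap] *= ramp                       # taper both edges to zero
    x[-n_tap:] *= ramp[::-1]
    return x

# A pure linear trend plus offset is removed almost exactly:
out = preprocess(np.linspace(0.0, 1.0, 1000) + 0.3)
print(bool(np.abs(out).max() < 1e-6))  # → True
```

Tapering before filtering or an FFT avoids edge artifacts, which is why these three steps usually run in exactly this order.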
At the same time, it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology, where research code often must be translated into stable, production-ready environments, especially in the age of big data. ObsPy has seen constant development for more than six years and enjoys wide adoption in the seismological community, with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions, to name a few examples. Additionally, it sparked the development of several more specialized packages, slowly building a modern seismological ecosystem around it. We will present a short overview of the capabilities of ObsPy and point out several representative use cases and more specialized software built around ObsPy. Additionally, we will discuss new and upcoming features, as well as the sustainability of open-source scientific software. 13. 10 CFR 72.103 - Geological and seismological characteristics for applications for dry cask modes of storage on or... Science.gov (United States) 2010-01-01 ... 10 Energy 2 2010-01-01 2010-01-01 false Geological and seismological characteristics for... § 72.103 Geological and seismological characteristics for applications for dry cask modes of storage on... foundation and geological investigation, literature review, and regional geological reconnaissance show no... 14. Preferential mantle lithospheric extension under the South China margin International Nuclear Information System (INIS) Clift, P.; Jian Lin 2001-01-01 Continental rifting in the South China Sea culminated in seafloor spreading at ∼30 Ma (Late Oligocene). The basin and associated margins form a classic example of break-up in a relatively juvenile arc crust environment.
In this study, we documented the timing, distribution and amount of extension in the crust and mantle lithosphere on the South China Margin during this process. Applying a one-dimensional backstripping modeling technique to drilling data from the Pearl River Mouth Basin (PRMB) and Beibu Gulf Basin, we calculated subsidence rates of the wells and examined the timing and amount of extension. Our results show that extension of the crust exceeded that in the mantle lithosphere under the South China Shelf, but that the two varied in phase, suggesting depth-dependent extension rather than a lithospheric-scale detachment. Estimates of total crustal extension derived in this way are similar to those measured by seismic refraction, indicating that isostatic compensation is close to being local. Extension in the Beibu Gulf appears to be more uniform with depth, a difference that we attribute to the different style of strain accommodation during continental break-up compared to intra-continental rifting. Extension in the PRMB and on the South China slope continued for ∼5 m.y. after the onset of seafloor spreading due to the weakness of the continental lithosphere. The timing of major extension is broadly mid-late Eocene to late Oligocene (∼45-25 Ma), but is impossible to correlate in detail with poorly dated strike-slip deformation in the Red River Fault Zone. (author) 15. A Swarm lithospheric magnetic field model to SH degree 80 OpenAIRE Thébault, Erwan; Vigneron, Pierre; Langlais, Benoit; Hulot, Gauthier 2016-01-01 International audience; The Swarm constellation of satellites was launched in November 2013 and has since delivered high-quality scalar and vector magnetic field measurements. A consortium of several research institutions was selected by the European Space Agency to provide a number of scientific products to be made available to the scientific community on a regular basis. In this study, we present the dedicated lithospheric field inversion model.
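The one-dimensional backstripping applied to the PRMB and Beibu Gulf wells above rests on Airy isostasy: removing the sediment load leaves the water-loaded (tectonic) subsidence. A minimal sketch follows; the densities and well numbers are illustrative assumptions, not values from the study, and decompaction and sea-level corrections are omitted:

```python
# Airy-isostatic backstripping: water-loaded tectonic subsidence Y from
# sediment thickness S, mean sediment density rho_s, and water depth Wd:
#     Y = S * (rho_m - rho_s) / (rho_m - rho_w) + Wd
# Illustrative densities; decompaction and eustatic corrections omitted.
RHO_M = 3300.0  # mantle density, kg/m^3
RHO_W = 1030.0  # seawater density, kg/m^3

def tectonic_subsidence(sediment_thickness_m, rho_sediment, water_depth_m=0.0):
    """Sediment-unloaded (tectonic) subsidence in metres under Airy isostasy."""
    backstrip = (RHO_M - rho_sediment) / (RHO_M - RHO_W)
    return sediment_thickness_m * backstrip + water_depth_m

# 3 km of sediments with mean density 2400 kg/m^3 deposited at ~100 m water depth:
print(round(tectonic_subsidence(3000.0, 2400.0, 100.0)))  # → 1289 m
```

Comparing this sediment-unloaded subsidence with stretching-model predictions is what lets the study separate crustal from mantle-lithosphere extension.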
It uses carefully selected magnetic fiel... 16. Mantle Earthquakes in Thinned Proterozoic Lithosphere: Harrat Lunayyir, Saudi Arabia Science.gov (United States) Blanchette, A. R.; Klemperer, S. L.; Mooney, W. D.; Zahran, H. M. 2017-12-01 Harrat Lunayyir is an active volcanic field located in the western Arabian Shield, 100 km outside of the Red Sea rift margin. We use common conversion point (CCP) stacking of P-wave receiver functions (PRFs) to show that the Moho is at 38 km depth, close to the 40 km crustal thickness measured in the center of the craton, whereas the lithosphere-asthenosphere boundary (LAB) is at 60 km, far shallower than the 150 km in the craton interior. We locate 67 high-frequency earthquakes with mL ≤ 2.5 at depths of 40-50 km below the surface, clearly within the mantle lid. The occurrence of earthquakes within the lithospheric mantle requires a geothermal temperature profile that is below equilibrium. The lithosphere cannot have thinned to its present thickness earlier than 15 Ma, either during an extended period of rifting possibly beginning 24 Ma or, more likely, as part of the second stage of rifting following collision between Arabia and Eurasia. 17. Lithospheric flexural strength and effective elastic thicknesses of the Eastern Anatolia (Turkey) and surrounding region Science.gov (United States) Oruç, Bülent; Gomez-Ortiz, David; Petit, Carole 2017-12-01 The lithospheric structure of Eastern Anatolia and the surrounding region, including the northern part of the Arabian platform, is investigated via the analysis and modeling of Bouguer anomalies from the Earth Gravitational Model EGM08. The effective elastic thickness of the lithosphere (EET), which corresponds to the mechanical cores of the crust and lithospheric mantle, is determined from the spectral coherence between Bouguer anomalies and surface elevation data. Its average value is 18.7 km.
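An effective elastic thickness maps directly to a flexural rigidity D through the standard thin-plate relation D = E·Te³ / (12(1−ν²)). A quick sketch using the average Te = 18.7 km quoted above, with assumed typical values for Young's modulus and Poisson's ratio (not parameters stated in the abstract):

```python
# Flexural rigidity from effective elastic thickness:
#     D = E * Te^3 / (12 * (1 - nu^2))
# E and nu are assumed typical lithospheric values, not from the paper.
E = 70e9   # Young's modulus, Pa
NU = 0.25  # Poisson's ratio

def flexural_rigidity(te_m):
    """Flexural rigidity (N*m) of an elastic plate of thickness te_m metres."""
    return E * te_m**3 / (12.0 * (1.0 - NU**2))

d = flexural_rigidity(18.7e3)  # the average EET of 18.7 km
print(f"{d:.2e}")              # on the order of 10**22 N*m
```

The cubic dependence on Te is why the reported 12-23 km range of EET spans almost an order of magnitude in lithospheric strength.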
From the logarithmic amplitude spectra of Bouguer anomalies, the average depths of the lithosphere-asthenosphere boundary (LAB), Moho, Conrad and basement in the study area are constrained at 84 km, 39 km, 16 km and 7 km, respectively. The geometries of the LAB and Moho are then estimated using the Parker-Oldenburg inversion algorithm. We also present a lithospheric strength map obtained from the spatial variations of EET determined by yield stress envelopes (YSE). The EET varies in the range of 12-23 km, in good agreement with the average value obtained from spectral analysis. Low EET values are interpreted as resulting from thermal and flexural lithospheric weakening. According to the lithospheric strength of the Eastern Anatolian region, the rheology model consists of a strong but brittle upper crust, a weak and ductile lower crust, and a weak lower part of the lithosphere. For the northern part of the Arabian platform, on the other hand, the lithospheric strength corresponds to a strong upper crust, a weak and ductile lower crust, and a strong uppermost lithospheric mantle. 18. Metamorphism and Shear Localization in the Oceanic and Continental Lithosphere: A Local or Lithospheric-Scale Effect? Science.gov (United States) Montesi, L. 2017-12-01 Ductile rheologies are characterized by strain rate hardening, which favors deformation zones that are as wide as possible, thus minimizing strain rate and stress. By contrast, plate tectonics and the observation of ductile shear zones in the exposed middle to lower crust show that deformation is often localized, that is, strain (and likely strain rate) is locally very high. This behavior is most easily explained if the material in the shear zone is intrinsically weaker than the reference material forming the wall rocks. Many origins for that weakness have been proposed. They include higher temperature (shear heating), reduced grain size, and fabric.
The latter two were shown to be the most effective in the middle crust and upper mantle (given observational limits restricting heating to 50 K or less), but they were not very important in the lower crust, and they are not sufficient to explain the generation of narrow plate boundaries in the oceans. We evaluate here the importance of metamorphism, especially related to hydration, in weakening the lithosphere. Serpentine is a major player in the dynamics of the oceanic lithosphere. Although its ductile behavior is poorly constrained, serpentine is likely to behave in a brittle or quasi-plastic manner with a reduced coefficient of friction, replacing stronger peridotite. Serpentinization sufficiently weakens the oceanic lithosphere to explain the generation of diffuse plate boundaries and, combined with grain size reduction, the development of narrow plate boundaries. Lower crust outcrops, especially in the Bergen Arc (Norway), display eclogite shear zones hosted in metastable granulites. The introduction of water locally triggered a metamorphic reaction that reduced rock strength and resulted in a ductile shear zone. The presence of these shear zones has been used to explain the weakness of the lower crust perceived from geodesy and seismic activity. We evaluate here how much strain rate may increase as a result of 19. Network of Research Infrastructures for European Seismology (NERIES)—Web Portal Developments for Interactive Access to Earthquake Data on a European Scale OpenAIRE A. Spinuso; L. Trani; S. Rives; P. Thomy; F. Euchner; Danijel Schorlemmer; Joachim Saul; Andres Heinloo; R. Bossu; T. van Eck 2009-01-01 The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research.
Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives, therefore this scenario suggested a design approach bas... 20. Using natural laboratories and modeling to decipher lithospheric rheology Science.gov (United States) Sobolev, Stephan 2013-04-01 Rheology is obviously important for geodynamic modeling, but at the same time rheological parameters appear to be among the least constrained. Laboratory experiments give rather large ranges of rheological parameters, and their scaling to nature is not entirely clear. Therefore, finding rheological proxies in nature is very important. One way to do that is to find appropriate values of rheological parameters by fitting models to the lithospheric structure in highly deformed regions where the lithospheric structure and geologic evolution are well constrained. Here I will present two examples of such studies at plate boundaries. One case is the Dead Sea Transform (DST), which comprises a boundary between the African and Arabian plates. During the last 15-20 Myr, more than 100 km of left-lateral transform displacement has accumulated on the DST, and the about 10 km thick Dead Sea Basin (DSB) was formed in the central part of the DST. The lithospheric structure and geological evolution of the DST and DSB are rather well constrained by a number of interdisciplinary projects, including the DESERT and DESIRE projects led by the GFZ Potsdam. Detailed observations reveal an apparently contradictory picture. On the one hand, widespread igneous activity, especially in the last 5 Myr, a thin (60-80 km) lithosphere constrained from seismic data, and the absence of seismicity below the Moho seem quite natural for this tectonically active plate boundary.
However, a surface heat flow of less than 50-60 mW/m2 and deep seismicity in the lower crust (deeper than 20 km) reported for this region are apparently inconsistent with the tectonic settings specific to an active continental plate boundary and with the crustal structure of the DSB. To address these inconsistencies, which comprise what I call the "DST heat-flow paradox", a 3D numerical thermo-mechanical model was developed, operating with a non-linear elasto-visco-plastic rheology of the lithosphere. Results of the numerical experiments show that the entire set of 1. Seismology and Research in Schools: One School's Experience Science.gov (United States) Tedd, Joe; Tedd, Bernie 2018-01-01 The UK School Seismology Project started in 2007. King Edward VI High School for Girls was one of the fortunate schools to obtain a school seismometer system, free of charge, as an early adopter of the resource. This report outlines our experiences with the system over the past 10 years and describes our recent research on the relationship between… 2. The establishment of the Blacknest seismological database on the Rutherford Laboratory system 360/195 computer International Nuclear Information System (INIS) Blamey, C. 1977-01-01 In order to assess the problems which might arise from monitoring a comprehensive test ban treaty by seismological methods, an experimental monitoring operation is being conducted. This work has involved the establishment of a database on the Rutherford Laboratory 360/195 system computer. The database can be accessed in the UK over the public telephone network and in the USA via ARPANET. (author) 3. Solving seismological problems using SGRAPH program: I-source parameters and hypocentral location International Nuclear Information System (INIS) Abdelwahed, Mohamed F. 2012-01-01 The SGRAPH program is considered one of the seismological programs that maintain seismic data.
SGRAPH is unique in its ability to read a wide range of data formats and to provide complementary tools for different seismological subjects in a stand-alone, Windows-based application. SGRAPH efficiently performs basic waveform analysis and solves advanced seismological problems. The graphical user interface (GUI) utilities and the Windows facilities, such as dialog boxes, menus, and toolbars, simplify user interaction with the data. SGRAPH supports common data formats such as SAC, SEED, GSE, ASCII, Nanometrics Y-format, and others. It provides the facilities to solve many seismological problems with its built-in inversion and modeling tools. In this paper, I discuss some of the inversion tools built into SGRAPH related to source parameters and hypocentral location estimation. Firstly, a description of the SGRAPH program is given, discussing some of its features. Secondly, the inversion tools are applied to some selected events of the Dahshour earthquakes as an example of estimating the spectral and source parameters of local earthquakes. In addition, the hypocentral locations of these events are estimated using the Hypoinverse 2000 program operated by SGRAPH. 4. Jupyter Notebooks for Earth Sciences: An Interactive Training Platform for Seismology Science.gov (United States) Igel, H.; Chow, B.; Donner, S.; Krischer, L.; van Driel, M.; Tape, C. 2017-12-01 We have initiated a community platform (http://www.seismo-live.org) where Python-based Jupyter notebooks (https://jupyter.org) can be accessed and run without necessary downloads or local software installations. The increasingly popular Jupyter notebooks allow the combination of markup language, graphics, and equations with interactive, executable Python code examples. Jupyter notebooks are a powerful and easy-to-grasp tool for students to develop entire projects, scientists to collaborate and efficiently interchange evolving workflows, and trainers to develop efficient practical material.
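Hypocentral location of the kind delegated above to Hypoinverse 2000 can be illustrated by its simplest ancestor: a grid search minimizing travel-time residuals in a homogeneous medium. The stations, velocity, and event below are entirely invented for illustration, and real locators also solve for depth and origin time:

```python
# Toy epicenter location: grid search over candidate positions, minimizing the
# RMS misfit between observed and predicted P travel times in a uniform medium.
# Stations, velocity, and the 'true' event are invented; depth and origin time
# are ignored for simplicity.
import math

VP = 6.0  # assumed uniform P velocity, km/s
stations = [(0.0, 0.0), (40.0, 5.0), (10.0, 35.0), (-20.0, 25.0)]  # x, y in km
true_event = (12.0, 9.0)

def travel_time(src, sta):
    return math.dist(src, sta) / VP

observed = [travel_time(true_event, s) for s in stations]

def locate(obs, step=1.0):
    """Return the grid node with minimum RMS travel-time residual."""
    best, best_rms = None, float("inf")
    for ix in range(-50, 51):
        for iy in range(-50, 51):
            cand = (ix * step, iy * step)
            res = [travel_time(cand, s) - o for s, o in zip(stations, obs)]
            rms = math.sqrt(sum(r * r for r in res) / len(res))
            if rms < best_rms:
                best, best_rms = cand, rms
    return best

print(locate(observed))  # → (12.0, 9.0): the true epicenter lies on the grid
```

Programs like Hypoinverse replace the brute-force search with iterative linearized least squares, but the misfit being minimized is the same idea.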
Utilizing the tmpnb project (https://github.com/jupyter/tmpnb), we link the power of Jupyter notebooks with an underlying server, such that notebooks can be run from anywhere, even on smartphones. We demonstrate the potential with notebooks for 1) learning the programming language Python, 2) basic signal processing, 3) an introduction to the ObsPy library (https://obspy.org) for seismology, 4) seismic noise analysis, 5) an entire suite of notebooks for computational seismology (the finite-difference method, pseudospectral methods, finite/spectral element methods, the finite-volume and the discontinuous Galerkin methods, Instaseis), 6) rotational seismology, 7) making results in papers fully reproducible, 8) a rate-and-state friction toolkit, 9) glacial seismology. The platform is run as a community project using Github. Submission of complementary Jupyter notebooks is encouraged. Extensions in the near future include linear(-ized) and nonlinear inverse problems. 5. On the Use of Quality Factor in Seismology (Invited) Science.gov (United States) Morozov, I. B. 2009-12-01 Despite its canonical character and widespread use in attenuation studies, the suitability of the quality factor Q for describing the Earth still needs to be reviewed. Specifically, we need to consider the following fundamental questions: 1) How close is Q-1 to representing a true medium property? 2) Theoretically, can or should Q-1 be related to complex arguments of the elastic moduli of the medium? and 3) What attenuation property is typically measured and transformed into Q? An attempt at answering these questions shows that not Q but the spatial attenuation coefficient, α, represents a consistent property of energy dissipation by the medium, and is also what is actually measured in most cases. Transformation of α into the apparent Q = πf/(αV) (V is the wave velocity and f the frequency) makes this Q a phenomenological attribute of the wave and also leads to a built-in positive frequency dependence.
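The built-in frequency dependence of the transformation Q = πf/(αV) just described can be checked numerically: with a frequency-independent attenuation coefficient α, the apparent Q is forced to grow linearly with f. The α and V values below are arbitrary illustrations:

```python
# Apparent quality factor from the spatial attenuation coefficient:
#     Q(f) = pi * f / (alpha * V)
# With alpha constant, Q necessarily increases linearly with frequency,
# illustrating the abstract's point. alpha and V are arbitrary here.
import math

ALPHA = 0.002  # spatial attenuation coefficient, 1/km (illustrative)
V = 3.5        # wave velocity, km/s (illustrative)

def apparent_q(freq_hz, alpha=ALPHA, v=V):
    return math.pi * freq_hz / (alpha * v)

ratio = apparent_q(10.0) / apparent_q(1.0)
print(round(ratio))  # → 10: apparent Q scales directly with f when alpha is constant
```

This is why a near-constant α(f) can masquerade as a "strong positive Q(f)" even though the medium's dissipation is frequency independent.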
Such strong positive Q(f) is often reported, particularly for the "scattering Q," but it may be entirely due to near-constant values of α. The above transformation is also prone to the well-known uncertainties related to the compensation of geometrical spreading (GS). Q can be adequately used to model wave amplitudes, but it includes the effects of the structure (e.g., diving or reflected-wave GS) and becomes ambiguous when applied to describing the in-situ attenuation. By using α(f) in interpretation, such effects are explicitly measured, and several general observations can be made, such as: 1) α(f) often shows linear dependencies on f in both real data and modeling, whereas the corresponding Q(f) may be complex; 2) the zero-frequency limit of α(f), γ, can be interpreted as a generalized measure of geometrical spreading; and 3) frequency-dependent in-situ Q is not as widespread as commonly thought. The quantity γ is variable and correlates with the tectonic age of the lithosphere, whereas the effective frequency-independent Qe is typically significantly higher than 6. The EGU Seismology Division Early Career Scientist Representative team and its initiatives Science.gov (United States) Parisi, Laura; Ermert, Laura; Gualtieri, Lucia; Spieker, Kathrin; Van Noten, Koen; Agius, Matthew R.; Mai, P. Martin 2017-04-01 Since 2014, the Seismology Division (SM) of the European Geosciences Union (EGU) has had an Early Career Scientist (ECS) representative to reach out to its numerous 'younger' members. In April 2016, a new team of representatives joined the Division. We are a vibrant team of early career scientists, representing both PhD students and post-doctoral researchers working in different seismological disciplines and different countries.
The initiatives of the SM ECS-rep team have various aims: (1) to motivate ECSs to get involved in activities and initiatives of the EGU and the Seismology Division, (2) to promote the research of ECSs, (3) to discuss issues concerning seismologists during this particular stage of their career, (4) to share ideas on how to promote equality between scientists, and (5) to improve the public dissemination of scientific knowledge. In an effort to reach out to experienced and ECS seismologists more effectively and to continuously encourage them to voice their ideas by contributing to and following our initiatives, the team runs a blog and social media pages dedicated to seismology and earthquake trivia. Weekly posts are published on the blog and shared on social media regarding scientific and social aspects of seismology. One of the major contributions recently introduced to the blog is the "Paper of the Month" series, in which experienced seismologists write about recent or classical - must read - seismology articles. We also aim to organise and promote social and scientific events. During the EGU General Assembly 2016, a social event was held in Vienna allowing ECSs to network with peers in an informal environment. Given the success of this event, a similar event will be organized during the General Assembly 2017. Also, similar to previous years, a short course on basic seismology for non-seismologists will be requested and offered to all ECSs attending the General Assembly. Finally, a workshop dedicated entirely to ECS seismologists 7. Imaging the lithosphere-asthenosphere boundary across the transition from Phanerozoic Europe to the East-European Craton with S-receiver functions Science.gov (United States) Knapmeyer-Endrun, Brigitte; Krüger, Frank 2013-04-01 Cratons are characterized by their thick lithospheric roots. In the case of the Eastern European Craton, high seismic velocities have been imaged tomographically to more than 200 km depth.
However, the exact depth extent of the cratonic lithosphere, and especially the properties of the transition to a much thinner lithosphere beneath Phanerozoic central Europe, still remain under discussion. Whereas a number of recent seismic campaigns have significantly increased the knowledge about crustal structure and Moho topography in central Europe, comparably detailed 3-D information on upper mantle structure, e.g. the lithosphere-asthenosphere boundary (LAB), is still missing. The international PASSEQ experiment, conducted from 2006 to 2008, strove to fill this gap with the deployment of 196 seismological stations, roughly a quarter of which were equipped with broad-band sensors, between eastern Germany and Lithuania. With a mean inter-station distance of 60 km, reduced to about 20 km along the central profile, PASSEQ offers the densest coverage yet for a passive experiment in this region. Here, we present first S-receiver function results for this data set, complemented by additional data from national and regional networks and other temporary deployments. This increases the number of available broad-band stations to almost 300, though mostly located to the west of the Trans-European Suture Zone (TESZ). Besides, we also process data from short-period (1 s and 5 s) sensors. The visibility of mantle-transition-zone phases, even in single-station data, provides confidence in the quality of the obtained S-receiver functions. Moho conversions can be confidently identified for all stations. In the case of a low-velocity sedimentary cover, as found for example in the Polish Basin, the S-receiver functions even provide clearer information on Moho depth than the P-receiver functions, which are heavily disturbed by shallow reverberations. For stations west of the TESZ, a clear 8.
Structure of the lithosphere-asthenosphere system in the vicinity of the Tristan da Cunha hot spot as seen by surface waves Science.gov (United States) Bonadio, Raffaele; Geissler, Wolfram H.; Ravenna, Matteo; Lebedev, Sergei; Celli, Nicolas L.; Jokat, Wilfried; Jegen, Marion; Sens-Schönfelder, Christoph; Baba, Kiyoshi 2017-04-01 Tristan da Cunha is a volcanic island located above a hotspot in the South Atlantic. The deep mantle plume origin of the hotspot volcanism at the island is supported by anomalous geochemical data (Rohde et al., 2013 [1]) and global seismological evidence (French and Romanowicz, 2015 [2]). However, until recently, due to a lack of local geophysical data in the South Atlantic, and especially around Tristan da Cunha, the existence of a plume had not been confirmed. Therefore, an ocean bottom seismometer experiment was carried out in 2012 and 2013 in the vicinity of the archipelago, with the aim of obtaining geophysical data that may provide more detailed insight into the structure of the upper mantle, possibly confirming the existence of a plume. In this work we study the shear wave velocity structure of the lithosphere-asthenosphere system beneath the island. Rayleigh surface wave phase velocity dispersion curves have been obtained using a recent powerful implementation of the inter-station cross-correlation method (Meier et al., 2004 [3]; Soomro et al., 2016 [4]). The measured dispersion curves are used to invert for the 1D shear wave velocity structure beneath the study area and to obtain phase velocity tomographic maps. Our results show a pronounced low shear wave velocity anomaly between 70 and 120 km depth beneath the area; the lid shows high velocity, suggesting a cold, depleted and dehydrated shallow lithosphere, while the deeper lithosphere shows a velocity structure similar to young or rejuvenated Pacific oceanic lithosphere (Laske et al., 2011 [5]; Goes et al., 2012 [6]).
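The inter-station method measures, at each period, the delay a wavefront accumulates between two stations. In its most basic time-domain form this is a cross-correlation lag converted to an apparent velocity; the synthetic signal, station spacing, and sampling below are invented, and the actual method of Meier et al. (2004) works on phase spectra per period rather than a single broadband lag:

```python
# Toy inter-station measurement: cross-correlate two recordings of the same
# narrow-band wave packet and convert the lag to an apparent velocity.
# Synthetic signal, spacing and sampling are invented for illustration.
import numpy as np

dt = 0.5         # sampling interval, s
dist_km = 100.0  # inter-station distance along the great-circle path
v_true = 4.0     # km/s, so the true delay is 25 s = 50 samples

t = np.arange(0, 400, dt)
packet = np.exp(-((t - 100.0) / 20.0) ** 2) * np.sin(2 * np.pi * 0.05 * t)
delay_samples = int(dist_km / v_true / dt)
sta1 = packet
sta2 = np.roll(packet, delay_samples)  # same packet, arriving later

# Peak of the full cross-correlation gives the lag of sta2 relative to sta1:
lag = np.argmax(np.correlate(sta2, sta1, mode="full")) - (len(sta1) - 1)
v_measured = dist_km / (lag * dt)
print(v_measured)  # → 4.0 km/s
```

Repeating such a measurement over many narrow frequency bands yields the phase-velocity dispersion curve that the study inverts for 1D shear velocity structure.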
Below the base of the lithosphere, shear-wave velocities appear to be low, suggesting thermal effects and partial melting (as confirmed by petrological data). Decreasing velocities within the lithosphere towards the southwest probably reflect a thermal imprint of an underlying mantle plume.

References
[1] J.K. Rohde, P. van den Bogaard, K. Hoernle, F. Hauff, R. Werner, Evidence for an

9. SGRAPH (SeismoGRAPHer): Seismic waveform analysis and integrated tools in seismology

Science.gov (United States)

Abdelwahed, Mohamed F.

2012-03-01

Although numerous seismological programs are currently available, most of them suffer from the inability to manipulate different data formats and the lack of embedded seismological tools. SeismoGRAPHer, or simply SGRAPH, is a new system for maintaining and analyzing seismic waveform data in a stand-alone, Windows-based application that manipulates a wide range of data formats. SGRAPH was intended to be a tool sufficient for performing basic waveform analysis and solving advanced seismological problems. The graphical user interface (GUI) utilities and the Windows functionalities, such as dialog boxes, menus, and toolbars, simplify user interaction with the data. SGRAPH supports common data formats, such as SAC, SEED, GSE, ASCII, and Nanometrics Y-format, and provides the ability to solve many seismological problems with built-in inversion tools. Loaded traces are maintained, processed, plotted, and saved in SAC, ASCII, or PS (PostScript) file formats. SGRAPH includes Generalized Ray Theory (GRT), genetic algorithm (GA), least-squares fitting, auto-picking, fast Fourier transform (FFT), and many additional tools. This program provides rapid estimation of earthquake source parameters, location, attenuation, and focal mechanisms. Advanced waveform modeling techniques are provided for crustal structure and focal mechanism estimation.
SGRAPH has been employed in the Egyptian National Seismic Network (ENSN) as a tool assisting with routine work and data analysis. More than 30 users have been using previous versions of SGRAPH in their research for more than 3 years. The main features of this application are ease of use, speed, small disk space requirements, and the absence of third-party developed components. Because of its architectural structure, SGRAPH can be interfaced with newly developed methods or applications in seismology. A complete setup file, including the SGRAPH package with the online user guide, is available.

10. Promoting seismology education and research via the IRIS Education and Public Outreach Program

Science.gov (United States)

Taber, J. J.; Bravo, T. K.; Dorr, P. M.; Hubenthal, M.; Johnson, J. A.; McQuillan, P.; Sumy, D. F.; Welti, R.

2015-12-01

The Incorporated Research Institutions for Seismology's Education and Public Outreach (EPO) program is committed to advancing awareness and understanding of seismology and geophysics, while inspiring careers in the Earth sciences. To achieve this mission, IRIS EPO combines the content and research expertise of the consortium membership with the educational and outreach expertise of IRIS staff to create a portfolio of programs, products, and services that target a range of audiences, including grades 6-12 students and teachers, undergraduate and graduate students, faculty, and the general public. IRIS also partners with UNAVCO and other organizations in support of EarthScope, where the facilities are well suited for sustained engagement of multiple audiences. Examples of research-related EPO products and services include the following resources. Tools developed in collaboration with IRIS Data Services provide public and educational access to data and to a suite of data products.
Teachers can stream seismic data from educational or research sensors into their classrooms, and the Active Earth Monitor display, designed for visitor centers, universities and small museums, provides views of recent data along with animations that explain seismology concepts and stories about recent research. Teachable Moment slide sets, created in collaboration with the University of Portland within 24 hours of major earthquakes, provide interpreted USGS tectonic maps and summaries, animations, visualizations, and other event-specific information so educators can explore newsworthy earthquakes with their students. Introductory undergraduate classroom activities have been designed to introduce students to some grand challenges in seismological research, while our Research Experiences for Undergraduates program pairs students with seismology researchers throughout the Consortium and provides the opportunity for the students to present their research at a national meeting. EPO activities are evaluated via a

11. Recent activities of the Seismology Division Early Career Representative(s)

Science.gov (United States)

Agius, Matthew; Van Noten, Koen; Ermert, Laura; Mai, P. Martin; Krawczyk, CharLotte

2016-04-01

The European Geosciences Union is a bottom-up organisation, in which its members are represented by their respective scientific divisions, committees and council. In recent years, EGU has embarked on a mission to reach out to its numerous 'younger' members by giving awards to outstanding young scientists and by setting up Early Career Scientists (ECS) representatives. The division representative's role is to engage in discussions that concern students and early career scientists. Several meetings between all the division representatives are held throughout the year to discuss ideas and Union-wide issues. One important impact ECS representatives have had on EGU is the increased number of short courses and workshops run by ECS during the annual General Assembly.
Another important contribution of ECS representatives was redefining 'Young Scientist' as 'Early Career Scientist', which avoids discrimination due to age. Since 2014, the Seismology Division has had its own ECS representative. In an effort to reach out more effectively to young seismologists, a blog and a social media page dedicated to seismology have been set up online. With this dedicated blog, we'd like to give more depth to the average browsing experience by enabling young researchers to explore various seismology topics in one place while making the field more exciting and accessible to the broader community. These pages are used to promote the latest research, especially that of young seismologists, and to share interesting seismo-news. Over the months the pages have proved popular, with hundreds of views every week and an increasing number of followers. An online survey was conducted to learn more about the activities and needs of early career seismologists. We present the results from this survey and the work that has been carried out over the last two years, including detail of what has been achieved so far and what we would like the ECS representation for Seismology to achieve. Young seismologists are

12. Mantle weakening and strain localization: Implications for the long-term strength of the continental lithosphere

OpenAIRE

Précigout, Jacques; Gueydan, Frédéric

2009-01-01

Mechanics of the continental lithosphere require the presence of a high-strength uppermost mantle that defines the "jelly sandwich" model for lithosphere strength layering. However, in deforming regions, growing numbers of geological and geophysical data predict a sub-Moho mantle strength lower than the crustal strength, or a "crème brûlée" model. To reconcile these two opposite views of lithosphere strength layering, we account for a new olivine rheology, which could ...

13.
Impact of the lithosphere on dynamic topography: Insights from analogue modeling

OpenAIRE

Sembroni, Andrea; Kiraly, Agnes; Faccenna, Claudio; Funiciello, Francesca; Becker, Thorsten W.; Goblig, Jan; Fernandez, Manel

2017-01-01

Density anomalies beneath the lithosphere are expected to generate dynamic topography at the Earth's surface due to the induced mantle flow stresses, which scale linearly with density anomalies, while the viscosity of the upper mantle is expected to control uplift rates. However, limited attention has been given to the role of the lithosphere. Here we present results from analogue modeling of the interactions between a density anomaly rising in the mantle and the lithosphere in a Newtonian sys...

14. Towards an improved determination of Earth's lithospheric field from satellite observations

DEFF Research Database (Denmark)

Kotsiaros, Stavros; Olsen, Nils; Finlay, Chris

Perhaps one of the biggest difficulties in modelling the Earth's lithospheric magnetic field is the separation of contributions from sources of internal and external origin. In particular, the determination of smaller-scale lithospheric magnetic field features is problematic because...

15. Sub-Moho Reflectors, Mantle Faults and Lithospheric Rheology

Science.gov (United States)

Brown, L. D.

2013-12-01

One of the most unexpected and dramatic observations from the early years of deep reflection profiling of the continents using multichannel CMP techniques was the existence of prominent reflections from the upper mantle.
The first of these, the Flannan thrust/fault/feature, was traced by marine profiling of the continental margin offshore Britain by the BIRPS program, which soon found it to be but one of several clear sub-crustal discontinuities in that area. Subsequently, similar mantle reflectors have been observed in many areas around the world, most commonly beneath Precambrian cratonic areas. Many, but not all, of these mantle reflections appear to arise from near the overlying Moho or within the lower crust before dipping well into the mantle. Others occur as subhorizontal events at various depths within the mantle, with one suite seeming to cluster at a depth of about 75 km. The dipping events have been variously interpreted as mantle roots of crustal normal faults or the deep extension of crustal thrust faults. The most common interpretation, however, is that these dipping events are the relicts of ancient subduction zones, the stumps of now detached Benioff zones long since reclaimed by the deeper mantle. In addition to the BIRPS reflectors, the best known examples include those beneath Fennoscandia in northern Europe, the Abitibi-Grenville of eastern Canada, and the Slave Province of northwestern Canada (e.g. on the SNORCLE profile). The most recently reported example is from beneath the Sichuan Basin of central China. The preservation of these coherent, and relatively delicate appearing, features beneath older continental crust, and presumably within equally old (if not older) mantle lithosphere, has profound implications for the history and rheology of the lithosphere in these areas. If they represent, as widely believed, some form of faulting within the lithosphere, they provide corollary constraints on the nature of faulting in both the lower crust and

16. Linking plate reconstructions with deforming lithosphere to geodynamic models

Science.gov (United States)

Müller, R. D.; Gurnis, M.; Flament, N.; Seton, M.; Spasojevic, S.; Williams, S.; Zahirovic, S.
2011-12-01

While global computational models are rapidly advancing in terms of their capabilities, there is an increasing need for assimilating observations into these models and/or ground-truthing model outputs. The open-source and platform-independent GPlates software fills this gap. It was originally conceived as a tool to interactively visualize and manipulate classical rigid plate reconstructions and represent them as time-dependent topological networks of editable plate boundaries. The user can export time-dependent plate velocity meshes that can be used either to define initial surface boundary conditions for geodynamic models or, alternatively, to impose plate motions throughout a geodynamic model run. However, tectonic plates are not rigid, and neglecting plate deformation, especially that of the edges of overriding plates, can result in significant misplacing of plate boundaries through time. A new, substantially re-engineered version of GPlates is now being developed that allows an embedding of deforming plates into topological plate boundary networks. We use geophysical and geological data to define the limit between rigid and deforming areas, and the deformation history of non-rigid blocks. The velocity field predicted by these reconstructions can then be used as a time-dependent surface boundary condition in regional or global 3-D geodynamic models, or alternatively as an initial boundary condition for a particular plate configuration at a given time. For time-dependent models with imposed plate motions (e.g. using CitcomS) we incorporate the continental lithosphere by embedding compositionally distinct crust and continental lithosphere within the thermal lithosphere. We define three isostatic columns of different thickness and buoyancy based on the tectonothermal age of the continents: Archean, Proterozoic and Phanerozoic. In the fourth isostatic column, the oceans, the thickness of the thermal lithosphere is assimilated using a half-space cooling model.
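For reference, the half-space cooling model mentioned above gives the oceanic geotherm as T(z, t) = Tm * erf(z / (2*sqrt(kappa*t))) (surface temperature taken as zero for simplicity), so a thermal-lithosphere thickness can be defined as the depth of a chosen isotherm. A minimal stdlib-only sketch with typical, assumed parameter values (not those used in the GPlates workflow):

```python
import math

KAPPA = 1e-6          # thermal diffusivity (m^2/s), a typical mantle value
SEC_PER_MYR = 3.156e13

def erfinv(y, lo=0.0, hi=6.0):
    """Invert math.erf on [lo, hi] by bisection (stdlib only)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def thermal_thickness_km(age_myr, isotherm_frac=0.9):
    """Depth (km) where T reaches `isotherm_frac` of the mantle temperature
    for sea floor of the given age, from T(z,t) = Tm * erf(z / (2*sqrt(k*t)))."""
    return 2.0 * erfinv(isotherm_frac) * math.sqrt(KAPPA * age_myr * SEC_PER_MYR) / 1e3

print(thermal_thickness_km(80.0))   # roughly 117 km for 80 Myr old sea floor
```

The 0.9 isotherm is one common convention for the thermal plate base; picking a different fraction shifts the thickness but not the sqrt(age) scaling.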
We also

17. Anomalous variations of lithosphere magnetic field before several earthquakes

Science.gov (United States)

Ni, Z.; Chen, B.

2015-12-01

Based on geomagnetic vector data measured each year since 2011 at more than 500 sites with a mean spatial interval of ~70 km, we observed anomalous variations of the lithospheric magnetic field before and after over 15 earthquakes of magnitude > 5. We find that the field in near proximity (about 50 km) to the epicenter of large earthquakes shows high spatial and temporal gradients before the earthquake. Due to the low frequency of repeat measurements, it is unclear when these variations occurred and how they evolve. We delineate the anomalous magnetic field using circles with a radius of 50 km, usually in June of each year, and then check whether an earthquake locates within these circles during the following year (June to the next June). So far we have caught 10 of the 15 main shocks of magnitude > 5; most of them were located less than 10 km from our circles, and some were inside them. Most results show that the variation of the lithospheric magnetic field at the epicenter usually differs from the surrounding background. When we map the horizontal (vector) variations of the lithospheric magnetic field together with the epicenters for the year following each June, half of the cases show earthquakes locating at 'the islands in a flowing river': earthquakes may occur in 'quiet' regions while the background shows a 'flowing', liquid-like character. Comparison with GPS results suggests that these variations of the lithospheric magnetic field may also correlate with displacement of the Earth's surface. However, since we have not compared GPS results for each earthquake, it remains unclear whether these anomalous variations of the lithospheric magnetic field also correlate with anomalous displacement of the Earth's surface.
Future work will include developing an automated method for identifying this type of anomalous field behavior and shortening the repeat-measurement period to 6 months in order to determine when these variations occur.

18. Dynamics of Lithospheric Extension and Residual Topography in Southern Tibet

Science.gov (United States)

Chen, B.; Shahnas, M. H.; Pysklywec, R.; Sengul Uluocak, E.

2017-12-01

Although the north-south (N-S) convergence between India and Eurasia is ongoing, a number of north-south-trending rifts (e.g., the Tangra Yum Co Rift, Yadong-Gulu Rift and Cona Rift) and normal faulting are observed at the surface of southern Tibet, suggesting an east-west (E-W) extensional tectonic regime. The earthquake focal mechanisms also show that deformation of southern Tibet is dominated by E-W extension across these N-S-trending rifts. Because the structure of the lithosphere and underlying mantle is poorly understood, the origin of the east-west extension of southern Tibet is still under debate. Gravitational collapse, oblique convergence, and mantle upwelling are among the possible responsible mechanisms. We employ a 3-D spherical control-volume model of present-day mantle flow to understand the relationship between topographic features (e.g., the rifts and the west-east extension), intermediate-depth earthquakes, and tectonic stresses induced by mantle flow beneath the region. The thermal structure of the mantle and crust is obtained from P- and S-wave seismic inversions and heat flow data. Power-law creep with a viscous-plastic rheology, describing the behavior of the lithosphere and mantle material, is employed. We determine the models which can best reconcile the observed features of southern Tibet, including surface heat flow, residual topography with uplift and subsidence, reported GPS rates of vertical movements, and the earthquake events.
The 3-D geodynamic modeling of the contemporary mantle flow-lithosphere response quantifies the relative importance of the various proposed mechanisms responsible for the E-W extension and deep earthquakes in southern Tibet. The results also have further implications for the magmatic activity and crustal rheology of the region.

19. Generation of continental rifts, basins, and swells by lithosphere instabilities

Science.gov (United States)

Fourel, Loïc; Milelli, Laura; Jaupart, Claude; Limare, Angela

2013-06-01

Continents may be affected simultaneously by rifting, uplift, volcanic activity, and basin formation in several different locations, suggesting a common driving mechanism that is intrinsic to continents. We describe a new type of convective instability at the base of the lithosphere that leads to a remarkable spatial pattern at the scale of an entire continent. We carried out fluid mechanics laboratory experiments on buoyant blocks of finite size that became unstable due to cooling from above. The dynamical behavior depends on three dimensionless numbers: a Rayleigh number for the unstable block, a buoyancy number that scales the intrinsic density contrast to the thermal one, and the aspect ratio of the block. Within the block, instability develops in two different ways in an outer annulus and in an interior region. In the outer annulus, upwellings and downwellings take the form of periodically spaced radial spokes. The interior region hosts the more familiar convective pattern of polygonal cells. In geological conditions, such instabilities should manifest themselves as linear rifts striking at a right angle to the continent-ocean boundary and an array of domal uplifts, volcanic swells, and basins in the continental interior. Simple scaling laws for the dimensions and spacings of the convective structures are derived. For the subcontinental lithospheric mantle, these dimensions take values in the 500-1000 km range, close to geological examples.
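The two dimensionless numbers named above (alongside the aspect ratio) can be evaluated for rough lithospheric values to see where a natural case might sit. Every parameter value below is an assumed, order-of-magnitude illustration, not one of the experimental values:

```python
# Dimensionless numbers for the lithospheric-instability scaling described
# above. All parameter values are illustrative, not from the experiments.
rho = 3300.0       # reference mantle density (kg/m^3)
alpha = 3e-5       # thermal expansivity (1/K)
dT = 600.0         # temperature contrast across the unstable layer (K)
h = 150e3          # thickness of the unstable block (m)
g = 9.81           # gravity (m/s^2)
kappa = 1e-6       # thermal diffusivity (m^2/s)
eta = 1e21         # viscosity (Pa s)
drho_c = 30.0      # intrinsic (compositional) density deficit (kg/m^3)

# Thermal Rayleigh number of the block
Ra = rho * g * alpha * dT * h**3 / (kappa * eta)
# Buoyancy number: intrinsic density contrast scaled to the thermal one
B = drho_c / (rho * alpha * dT)

print(f"Ra = {Ra:.0f}, B = {B:.2f}")
```

With these numbers Ra is of order 10^3 (convectively unstable) and B is around 0.5; a larger intrinsic buoyancy (larger B), as invoked above for Archean roots, suppresses the instability.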
The large intrinsic buoyancy of Archean lithospheric roots prevents this type of instability, which explains why the widespread volcanic activity that currently affects Western Africa is confined to post-Archean domains.

20. The lithosphere-asthenosphere boundary observed with USArray receiver functions

Directory of Open Access Journals (Sweden)

P. Kumar

2012-05-01

The dense deployment of seismic stations so far in the western half of the United States within the USArray project provides the opportunity to study in greater detail the structure of the lithosphere-asthenosphere system. We use the S-receiver-function technique for this purpose, which has higher resolution than surface wave tomography, is sensitive to seismic discontinuities, and is free from multiples, unlike P receiver functions. Only two major discontinuities are observed in the entire area down to about 300 km depth. These are the crust-mantle boundary (Moho) and a negative boundary, which we correlate with the lithosphere-asthenosphere boundary (LAB), since a low-velocity zone is the classical definition of the seismic observation of the asthenosphere by Gutenberg (1926). Our S-receiver-function LAB is at a depth of 70–80 km in large parts of westernmost North America. East of the Rocky Mountains, its depth is generally between 90 and 110 km. Regions with LAB depths down to about 140 km occur in a stretch from northern Texas, over the Colorado Plateau, to the Columbia basalts. These observations agree well with tomography results in the westernmost USA and on the east coast. However, in the central cratonic part of the USA, the tomography LAB is near 200 km depth. At this depth no discontinuity is seen in the S receiver functions. The negative signal near 100 km depth in the central part of the USA is interpreted by Yuan and Romanowicz (2010) and Lekic and Romanowicz (2011) as a recently discovered mid-lithospheric discontinuity (MLD).
A solution for the discrepancy between receiver-function imaging and surface-wave tomography is not yet obvious and requires more high-resolution studies at other cratons before a general answer can be found. Our results agree well with petrophysical models of increased water content in the asthenosphere, which predict a sharp and shallow LAB also in continents (Mierdel et al., 2007).

1. Regional Crustal Deformation and Lithosphere Thickness Observed with Geodetic Techniques

Science.gov (United States)

Vermeer, M.; Poutanen, M.; Kollo, K.; Koivula, H.; Ahola, J.

2009-04-01

The solid Earth, including the lithosphere, interacts in many ways with other components of the Earth system: oceans, atmosphere and climate. Geodesy is a key provider of data needed for global and environmental research. Geodesy provides methods and accurate measurements of contemporary deformation, sea level and gravity change. The importance of the decades-long stability and availability of reference frames must be stressed for such studies. In the future, the need to accurately monitor 3-D crustal motions will grow, together with increasingly precise GNSS (Global Navigation Satellite System) positioning, demands for better follow-up of global change, and local needs for crustal motions, especially in coastal areas. These demands cannot yet be satisfied. The project described here is part of a larger entity: Upper Mantle Dynamics and Quaternary Climate in Cratonic Areas, DynaQlim, an International Lithosphere Project (ILP)-sponsored initiative. The aims of DynaQlim are to understand the relations between upper mantle dynamics, mantle composition, physical properties, temperature and rheology; to study postglacial uplift and ice-thickness models, sea-level change and isostatic response; and to study Quaternary climate variations and the Weichselian (Laurentian and other) glaciations during the late Quaternary.
We aim to study various aspects of lithospheric motion within the Finnish and Fennoscandian area, but within a global perspective, using the newest geodetic techniques in a multidisciplinary setting. The studies involve observations of three-dimensional motions and gravity change in a multidisciplinary context on a range of spatial scales: the whole of Fennoscandia, Finland, a regional test area of Satakunta, and the local test site Olkiluoto. Objectives of the research include improving our insight into the 3-D motion of a thick lithosphere, and into the gravity effect of the uplift, using novel approaches; and improving the kinematic 3-D models in the

2. Lithosphere Response to Intracratonic Rifting: Examples from Europe and Siberia

DEFF Research Database (Denmark)

Artemieva, I. M.; Thybo, H.; Herceg, M.

2012-01-01

is based on critically assessed results from various seismic studies, including reflection and refraction profiles and receiver function studies. We also use global shear-wave tomography models, gravity constraints based on GOCE data, and thermal models for the lithosphere to speculate on thermo... of basaltic magmas and consequently in a change in mantle density and seismic velocities. Although kimberlite magmatism is commonly not considered as a rifting event, its deep causes may be similar to mantle-driven rifting and, as a consequence, modification of mantle density and velocity structure may... in its seismic wave velocity and density structure...

3. Geophysical anomalies associated with Imjin River Belt (IRB) in the middle Korean Peninsula revealed by geomagnetic depth sounding and seismological data

Science.gov (United States)

Yang, J.; Choi, H.; Noh, M.; Im, C.

2012-12-01

The Imjin River Belt (IRB), located in the middle of the Korean Peninsula, has been one of the long-standing geological issues because it is a very important tectonic link for understanding the tectonic evolution of north-eastern Asia, including China, Korea and Japan.
Although the IRB has been considered an extension of the collision belt between the North China Block (NCB) and the South China Block (SCB), there has been little geophysical observation or study on this issue. Recently, we compiled a new induction-arrow map for the Korean Peninsula on the basis of long-period magnetotelluric (MT) data and the geomagnetic depth sounding data collected since the late 1990s. This newly compiled map has finer spatial resolution, especially in the middle of the peninsula, which helps us to present geophysical evidence that the IRB is the continuation or extension of the collision belt into the peninsula. The overall pattern of induction arrows in the peninsula appears to indicate a northwest-southeast direction, which is the well-known 'sea effect' of the surrounding seas. However, the observations in the middle of the peninsula distinctly show an anomalous pattern around the IRB, which cannot be explained by the surrounding seas alone. This anomalous pattern may be attributed to enhanced conductivity associated with the tectonic events that the Imjin River Belt has experienced. The 3-D electromagnetic modeling results, considering both the surrounding seas and the enhanced conductivity of the IRB, explain well the anomalous observations around the IRB. Furthermore, a recent seismological study demonstrates that focal mechanisms around the IRB are mainly normal-faulting events, which may be interpreted as the reactivation of paleo-structures related to post-collisional lithospheric delamination. All the geophysical evidence convinces us that the IRB is an extension of the collision belt between the NCB and SCB into the peninsula.

4. Colorado Plateau magmatism and uplift by warming of heterogeneous lithosphere

Science.gov (United States)

Roy, Mousumi; Jordan, Thomas H.; Pederson, Joel

2009-06-18

The forces that drove rock uplift of the low-relief, high-elevation, tectonically stable Colorado Plateau are the subject of long-standing debate.
While the adjacent Basin and Range province and Rio Grande rift province underwent Cenozoic shortening followed by extension, the plateau experienced approximately 2 km of rock uplift without significant internal deformation. Here we propose that warming of the thicker, more iron-depleted Colorado Plateau lithosphere over 35-40 Myr following mid-Cenozoic removal of the Farallon plate from beneath North America is the primary mechanism driving rock uplift. In our model, conductive re-equilibration not only explains the rock uplift of the plateau, but also provides a robust geodynamic interpretation of observed contrasts between the Colorado Plateau margins and the plateau interior. In particular, the model matches the encroachment of Cenozoic magmatism from the margins towards the plateau interior at rates of 3-6 km Myr(-1) and is consistent with lower seismic velocities and more negative Bouguer gravity at the margins than in the plateau interior. We suggest that warming of heterogeneous lithosphere is a powerful mechanism for driving epeirogenic rock uplift of the Colorado Plateau and may be of general importance in plate-interior settings.

5. Continents as lithological icebergs: The importance of buoyant lithospheric roots

Science.gov (United States)

Abbott, D.H.; Drury, R.; Mooney, W.D.

1997-01-01

An understanding of the formation of new continental crust provides an important guide to locating the oldest terrestrial rocks and minerals. We evaluated the crustal thicknesses of the thinnest stable continental crust and of an unsubductable oceanic plateau and used the resulting data to estimate the amount of mantle melting which produces permanent continental crust. The lithospheric mantle is sufficiently depleted to produce permanent buoyancy (i.e., the crust is unsubductable) at crustal thicknesses greater than 25-27 km. These unsubductable oceanic plateaus and hotspot island chains are important sources of new continental crust.
The newest continental crust (e.g., the Ontong Java plateau) has a basaltic composition, not a granitic one. The observed structure and geochemistry of continents are the result of convergent-margin magmatism and metamorphism which modify the nascent basaltic crust into a lowermost basaltic layer overlain by a more silicic upper crust. The definition of a continent should imply only that the lithosphere is unsubductable over time periods of at least 0.25 Ga. Therefore, the search for the oldest crustal rocks should include rocks from lower to mid-crustal levels.

6. Mobile and modular. BGR develops seismological monitoring stations for universal applications; Mobil und modular. BGR entwickelt universell einsetzbare seismologische Messstationen

Energy Technology Data Exchange (ETDEWEB)

Hinz, Erwin; Hanneken, Mark [Bundesanstalt fuer Geowissenschaften und Rohstoffe, Hannover (Germany). Fachbereich "Seismologisches Zentralobservatorium, Kernwaffenteststopp"]

2016-05-15

BGR seismologists often set up monitoring stations for testing purposes. The engineers from the Central Seismological Observatory have now developed a new type of mobile monitoring station which can be remotely controlled.

7. Strain localization at the margins of strong lithospheric domains: insights from analogue models

NARCIS (Netherlands)

Calignano, Elisa; Sokoutis, Dimitrios; Willingshofer, Ernst; Gueydan, Frederic; Cloetingh, Sierd

The lateral variation of the mechanical properties of the continental lithosphere is an important factor controlling the localization of deformation, and thus the deformation history and geometry, of intra-plate mountain belts. A series of three-layer lithospheric-scale analogue models, with a strong domain

8. Implications of a visco-elastic model of the lithosphere for calculating yield strength envelopes

NARCIS (Netherlands)

Ershov, A.V.; Stephenson, R.A.

2006-01-01

The dominant deformation mechanism in the ductile part of the lithosphere is creep.
From a mechanical point of view, creep can be modelled as a viscous phenomenon. On the other hand, yield-strength envelopes (YSEs), commonly used to describe lithosphere rheology, are constructed supposing creep to

9. Lithosphere erosion and continental breakup: Interaction of extension, plume upwelling and melting

NARCIS (Netherlands)

Lavecchia, Alessio; Thieulot, Cedric; Beekman, Fred; Cloetingh, Sierd; Clark, Stuart

2017-01-01

We present the results of thermo-mechanical modelling of extension and breakup of a heterogeneous continental lithosphere, subjected to plume impingement in the presence of an intraplate stress field. We incorporate partial melting of the extending lithosphere, underlying upper mantle and plume, caused by

10. Robust high resolution models of the continental lithosphere: Methodology and application to Asia

NARCIS (Netherlands)

Stolk, W.

2013-01-01

Asia is a key natural laboratory for the study of active intra-continental deformation in far-field response to the ongoing collision of India and Eurasia. The resulting tectonic processes strongly depend on the thermo-mechanical structure of the lithosphere. This lithosphere can be separated into

11. Findings of an evaluation of public involvement programs associated with the development of a Land and Resource Management Plan for the Ouachita National Forest

Energy Technology Data Exchange (ETDEWEB)

Holthoff, M.G. [Pacific Northwest Lab., Richland, WA (United States)]; Howell, R.E. [Washington State Univ., Pullman, WA (United States)]

1993-08-01

Federal regulations require the United States Forest Service (USFS) to integrate public input and values into decisions concerning land and resource management planning. The USFS has typically relied on traditional methods of involving the public, whereby public access and input to policy development are unilaterally controlled by the agency.
Because of the highly political nature of land and resource management planning, such technocratic forms of public involvement and decision-making appear to be proving ineffective. This paper describes and evaluates two public involvement programs associated with the Ouachita National Forest's (ONF) lengthy forest planning process. The research consisted of personal interviews with key program leaders and knowledgeable citizen participants, collection of secondary data, and a survey of citizen participants. Because of controversial planning decisions made during an initial planning process, the ONF was forced to re-enter the planning process in order to address unresolved planning issues and to conduct a more effective public involvement program. The supplemental planning process also resulted in a considerable degree of public contention. The survey revealed that although citizen participants were somewhat more satisfied with the supplemental public involvement program relative to the initial program, neither program was viewed as satisfactory. The findings of the study suggest that in order to be more effective, USFS public involvement programs should be more responsive to public concerns and conducted in adherence to principles of collaborative planning. 12. Study of seismological evasion. Part III. Evaluation of evasion possibilities using codas of large earthquakes International Nuclear Information System (INIS) Evernden, J.F. 1976-01-01 The seismological aspects of various proposed means of obscuring or hiding the seismic signatures of explosions from a surveillance network are discussed. These so-called evasion schemes are discussed from the points of view of both the evader and the monitor. The analysis will be conducted in terms of the USSR solely because that country is so vast and the geological/geophysical complexities of the country are so great that the complete spectrum of hypothesized evasion schemes requires discussion.
Techniques appropriate for use when the seismic noise problem is interference due to codas of P and surface waves from earthquakes are described, and the capabilities of several seismological networks to restrain use of such codas for effective evasion are analyzed 13. Research and development activities of the Seismology Section for the period January 1984 - December 1985 International Nuclear Information System (INIS) Krishnan, C.A.; Murty, G.S. 1987-01-01 The Research and Development (R and D) activities during 1984-1985 of the Seismology Section of the Bhabha Atomic Research Centre, Bombay are reported in the form of individual summaries. The R and D activities of the Section are directed towards the development of seismological instruments and methods of analysis of seismic field data, with the main objective of detecting underground nuclear explosions and assessing the seismicity and seismic risk of sites considered for nuclear power stations. The Section has two field stations - one at Gauribidanur in the southern part of the country and another at Delhi, i.e. in the northern part of the country. During the report period, a total of 62 of the detected events were identified as underground explosions. The expertise of the Section is also made available to outside organisations. (M.G.B.) 14. Seismology, 1983: nuclear test ban verification, earthquake and earth resource investigation Energy Technology Data Exchange (ETDEWEB) 1984-03-01 This progress report for 1983 is the fourth yearly report summarizing the activities of the Division of Applied Seismology of the National Defence Research Institute (FOA) in Sweden. This division of the Institute is mainly involved in seismic discrimination and nuclear explosion monitoring. Special attention is paid in this report to the development of International Data Centers as a component of a global monitoring system. The division is also conducting a project on seismic risk estimation at nuclear power plants in Sweden.
This project includes operating a network of local seismic stations in Sweden. Two seismic exploration projects are also currently being conducted. One project involves the further development of seismic methods for oil exploration, and the other the investigation of crystalline rock using seismic cross-hole measurement. Finally, the Division of Applied Seismology is conducting a project in which seismic sensors in military applications are studied. 15. Seismology: Ways and means for regional cooperation. Transparencies used during the presentation International Nuclear Information System (INIS) Menzhi, M. 1999-01-01 Within the frame of international cooperation in the field of the CTBT, this paper describes the first seismological station established in Morocco in 1934, and the 15 further stations established in the sixties and seventies after the earthquake in Agadir. In 1982, a system for seismic detection was installed, having as its main objectives the following: coordination and correlation of activities concerned with the evaluation of seismic risks in the Mediterranean region, and integration of geophysical data needed for seismic risk assessment 16. Reflections from the interface between seismological research and earthquake risk reduction Science.gov (United States) Sargeant, S. 2012-04-01 Scientific understanding of earthquakes and their attendant hazards is vital for the development of effective earthquake risk reduction strategies. Within the global disaster reduction policy framework (the Hyogo Framework for Action, overseen by the UN International Strategy for Disaster Reduction), the anticipated role of science and scientists is clear, with respect to risk assessment, loss estimation, space-based observation, early warning and forecasting. The importance of information sharing and cooperation, cross-disciplinary networks and developing technical and institutional capacity for effective disaster management is also highlighted.
In practice, the degree to which seismological information is successfully delivered to and applied by individuals, groups or organisations working to manage or reduce the risk from earthquakes is variable. The challenge for scientists is to provide fit-for-purpose information that can be integrated simply into decision-making and risk reduction activities at all levels of governance and at different geographic scales, often by a non-technical audience (i.e. people without any seismological/earthquake engineering training). The interface between seismological research and earthquake risk reduction (defined here in terms of both the relationship between the science and its application, and the scientist and other risk stakeholders) is complex. This complexity is a function of a range of issues relating to communication, multidisciplinary working, politics, organisational practices, inter-organisational collaboration, working practices, sectoral cultures, individual and organisational values, worldviews and expectations. These factors can present significant obstacles to scientific information being incorporated into the decision-making process. The purpose of this paper is to present some personal reflections on the nature of the interface between the worlds of seismological research and risk reduction, and the 17. Synthetic Analysis of the Effective Elastic Thickness of the Lithosphere in China Science.gov (United States) Lu, Z.; Li, C. 2017-12-01 Effective elastic thickness (Te) represents the response of the lithosphere to long-term (longer than 10⁵ years) geological loading and reflects the deformation mechanism of the plate and its thermodynamic state. Temperature and composition of the lithosphere, coupling between crust and lithospheric mantle, and lithospheric structures affect Te.
Regional geology in China is quite complex, influenced by the subduction of the Pacific and Philippine Sea plates in the east and the collision of the Eurasia plate with the India-Australia plate in the southwest. Te can help us understand the evolution and strength of the lithospheres in different areas and tectonic units. Here we apply the multitaper coherence method to estimate Te in China using the topography (ETOPO1) and Bouguer gravity anomalies (WGM2012), at different window sizes (600 km × 600 km, 800 km × 800 km, 1000 km × 1000 km) and moving steps. The lateral variation of Te in China coincides well with the geology. The old stable cratons or basins always correspond to larger Te, whereas the oceanic lithosphere or active orogen blocks tend to have smaller Te. We further correlate Te to Curie-point depths (Zb) and heat flow to understand how temperature influences the strength of the lithosphere. Despite a complex correlation between Te and Zb, good positive correlations are found in the North China Block, Tarim Basin, and Lower Yangtze, showing strong influence of temperature on lithospheric strength. Conversely, the Tibetan Plateau, Upper and Middle Yangtze, and East China Sea Basin even show negative correlation, suggesting that lithospheric structures and compositions play more important roles than temperature in these blocks. We also find that earthquakes tend to occur preferably in a certain range of Te. Deeper earthquakes are more likely to occur where the lithosphere is stronger with larger Te. Crust with a larger Te may also have a deeper ductile-brittle boundary, along which deep large earthquakes tend to cluster. 18. In situ rheology of the oceanic lithosphere along the Hawaiian ridge Science.gov (United States) Pleus, A.; Ito, G.; Wessel, P.; Frazer, L. N. 2017-12-01 Much of our quantitative understanding of lithospheric rheology is based on rock deformation experiments carried out in the laboratory.
The accuracy of the relationships between stress and lithosphere deformation, however, is subject to large extrapolations, given that laboratory strain rates (10⁻⁷ s⁻¹) are much greater than geologic rates (10⁻¹⁵ to 10⁻¹² s⁻¹). In situ deformation experiments provide independent constraints and are therefore needed to improve our understanding of natural rheology. Zhong and Watts [2013] presented such a study around the main Hawaiian Islands and concluded that the lithosphere flexure requires a much weaker rheology than predicted by laboratory experiments. We build upon this study by investigating flexure around the older volcanoes of the Hawaiian ridge. The ridge is composed of a diversity of volcano sizes that loaded seafloor of nearly constant age (85 ± 8 Ma); this fortunate situation allows for an analysis of flexural responses to large variations in applied loads at nearly constant age-dependent lithosphere thermal structure. Our dataset includes new marine gravity and multi-beam bathymetry data collected onboard the Schmidt Ocean Institute's R/V Falkor. These data, along with forward models of lithospheric flexure, are used to obtain a joint posterior probability density function for model parameters that control the lithosphere's flexural response to a given load. These parameters include the frictional coefficient constraining brittle failure in the shallow lithosphere, the activation energy for the low-temperature plasticity regime, and the geothermal gradient of the Hawaiian lithosphere. The resulting in situ rheological parameters may be used to verify or update those derived in the lab. Attaining accurate lithospheric rheological properties is important to our knowledge, not only of the evolution of the Hawaiian lithosphere, but also of other solid-earth geophysical problems, such as oceanic earthquakes, subduction 19.
Formation of cratonic lithosphere: An integrated thermal and petrological model Science.gov (United States) Herzberg, Claude; Rudnick, Roberta 2012-09-01 The formation of cratonic mantle peridotite of Archean age is examined within the time frame of Earth's thermal history, and how it was expressed by temporal variations in magma and residue petrology. Peridotite residues that occupy the lithospheric mantle are rare owing to the effects of melt-rock reaction, metasomatism, and refertilization. Where they are identified, they are very similar to the predicted harzburgite residues of primary magmas of the dominant basalts in greenstone belts, which formed in a non-arc setting (referred to here as "non-arc basalts"). The compositions of these basalts indicate high temperatures of formation that are well-described by the thermal history model of Korenaga. In this model, peridotite residues of extensive ambient mantle melting had the highest Mg-numbers, lowest FeO contents, and lowest densities at ~ 2.5-3.5 Ga. These results are in good agreement with Re-Os ages of kimberlite-hosted cratonic mantle xenoliths and enclosed sulfides, and provide support for the hypothesis of Jordan that low densities of cratonic mantle are a measure of their high preservation potential. Cratonization of the Earth reached its zenith at ~ 2.5-3.5 Ga when ambient mantle was hot and extensive melting produced oceanic crust 30-45 km thick. However, there is a mass imbalance exhibited by the craton-wide distribution of harzburgite residues and the paucity of their complementary magmas that had compositions like the non-arc basalts. We suggest that the problem of the missing basaltic oceanic crust can be resolved by its hydration, cooling and partial transformation to eclogite, which caused foundering of the entire lithosphere. Some of the oceanic crust partially melted during foundering to produce continental crust composed of tonalite-trondhjemite-granodiorite (TTG). 
The remaining lithosphere gravitationally separated into 1) residual eclogite that continued its descent, and 2) buoyant harzburgite diapirs that rose to underplate cratonic nuclei 20. Recent progress in modelling 3D lithospheric deformation Science.gov (United States) Kaus, B. J. P.; Popov, A.; May, D. A. 2012-04-01 Modelling 3D lithospheric deformation remains a challenging task, predominantly because the variations in rock types, as well as nonlinearities due to, for example, plastic deformation result in sharp and very large jumps in effective viscosity contrast. As a result, there are only a limited number of 3D codes available, most of which use direct solvers, which are computationally and memory-wise very demanding. As a result, the resolutions for typical model runs are quite modest, despite the use of hundreds of processors (and using much larger computers is unlikely to bring much improvement in this situation). For this reason we recently developed a new 3D deformation code, called LaMEM: Lithosphere and Mantle Evolution Model. LaMEM is written on top of PETSc, and as a result it runs on massively parallel machines and we have a large number of iterative solvers available (including geometric and algebraic multigrid methods). As it remains unclear which solver combinations work best under which conditions, we have implemented most currently suggested methods (such as Schur complement reduction or fully coupled iterations). In addition, we can use either a finite element discretization (with Q1P0, stabilized Q1Q1 or Q2P-1 elements) or a staggered finite difference discretization for the same input geometry (which is based on a marker-and-cell technique). This gives us the flexibility to test various solver methodologies on the same model setup, in terms of accuracy, speed, memory usage etc.
Here, we will report on some features of LaMEM, on recent code additions, as well as on some lessons we learned which are important for modelling 3D lithospheric deformation. Specifically we will discuss: 1) How we combine a particle-and-cell method to make it work with both a finite difference and a (Lagrangian, Eulerian or ALE) finite element formulation, with only minor code modifications, and 2) How finite difference and finite element discretizations compare in terms of 1. Implications for anomalous mantle pressure and dynamic topography from lithospheric stress patterns in the North Atlantic Realm DEFF Research Database (Denmark) Schiffer, Christian; Nielsen, Søren Bom 2016-01-01 With convergent plate boundaries at some distance, the sources of the lithospheric stress field of the North Atlantic Realm are mainly mantle tractions at the base of the lithosphere, lithospheric density structure and topography. Given this, we estimate horizontal deviatoric stresses using a wel... 2. SeisCode: A seismological software repository for discovery and collaboration Science.gov (United States) Trabant, C.; Reyes, C. G.; Clark, A.; Karstens, R. 2012-12-01 SeisCode is a community repository for software used in seismological and related fields. The repository is intended to increase discoverability of such software and to provide a long-term home for software projects. Other places exist where seismological software may be found, but none meet the requirements necessary for an always current, easy to search, well documented, and citable resource for projects. Organizations such as IRIS, ORFEUS, and the USGS have websites with lists of available or contributed seismological software. Since the authors themselves often do not maintain these lists, the documentation often consists of a sentence or paragraph, and the available software may be outdated.
Repositories such as GoogleCode and SourceForge, which are directly maintained by the authors, provide version control and issue tracking but do not provide a unified way of locating geophysical software scattered in and among countless unrelated projects. Additionally, projects are hosted at language-specific sites such as Mathworks and PyPI, in FTP directories, and in websites strewn across the Web. Search engines are only partially effective discovery tools, as the desired software is often hidden deep within the results. SeisCode provides software authors a place to present their software, codes, scripts, tutorials, and examples to the seismological community. Authors can choose their own level of involvement. At one end of the spectrum, the author might simply create a web page that points to an existing site. At the other extreme, an author may choose to leverage the many tools provided by SeisCode, such as a source code management tool with integrated issue tracking, forums, news feeds, downloads, wikis, and more. For software development projects with multiple authors, SeisCode can also be used as a central site for collaboration. SeisCode provides the community with an easy way to discover software, while providing authors a way to build a community around their 3. 
The art of communicating seismology to broad audiences: the exhibition which changed the perception Science.gov (United States) Toma-Danila, Dragos; Tataru, Dragos; Nastase, Eduard; Muntean, Alexandra; Partheniu, Raluca 2017-04-01 Seismology is a geoscience often perceived by uninstructed broad audiences as unreliable or inconsistent, since it cannot predict future earthquakes or warn about them effectively; this criticism disregards important achievements that seismology has offered during its more than 100 years of history - such as evidence of Earth's inner structure, knowledge regarding plate tectonics, mineral resource identification, contributions to risk mitigation, monitoring of explosions etc. Moreover, seismology is a field of study with significant advances, which make (or could make) living much safer, in areas with high seismic hazard. We mentioned "could make" since people often fail to understand an important aspect: seismology offers consistent knowledge regarding how to prepare, construct or behave - but it's up to people and authorities to implement the effective measures. In all this story, the effective communication between scientists and the general public plays a major role, making the leap from misconception to relevant impact. As scientists, we wanted to show the true meaning and purpose of seismology to all categories of people. We are in the final stage of the MOBEE (MOBile Earthquake Exhibition) Project implementation, an innovative initiative in a highly seismic country (Romania), where the major Vrancea intermediate-depth earthquake source has the potential to generate a significant amount of damage over large areas; however, unlike in countries such as Japan, the medium to long period between felt or significant events (20-40 years) is long enough to make the newer generation in Romania disregardful of the hazard, and older generations skeptical about the role of seismology.
MOBEE intended to freshen up things, raise awareness and change the overall perception - through new approaches involving a blend of digital content (interactive apps, responsive and continuously updated website), 3D models achieved through new technologies (3D printing, fiber optics), non 4. Dynamics of the Pacific Northwest Lithosphere and Asthenosphere Science.gov (United States) Humphreys, E. 2013-12-01 Seismic imaging resolves a complex structure beneath the Pacific Northwest (PNW) that is interpreted as: an high-velocity piece of accreted (~50 Ma) Farallon lithosphere that deepens from being exposed (at coast, where it is called Siletzia) to lower crust in SE Washington and then descending vertically to ~600 km as a 'curtain' beneath central Idaho; a stubby Juan de Fuca slab (to directed tractions on the Cascadia mega-thrust average ~4 TN per meter of along-strike fault length, or probably a shear stress of ~40 MPa over much of the locked mega-thrust (i.e., much more shear stress than the typical earthquake stress drop of 1-10 MPa). Normal to the coast, southern Cascadia is relatively tensional (where margin-normal compression is less than typical ridge push by ~4 TN/m of along-strike fault length) whereas northern Cascadia is compressional. This indicates that the southern Cascadia mega-thrust is more weakly coupled than the northern mega-thrust. Southern Cascadia slab rollback and extension of the Cascade graben and Basin-and-Range are enabled by the weak coupling, in conjunction with high gravitational potential energy of the southern Oregon arc and back-arc. Juan de Fuca-Gorda lithosphere experiences the same stress on its eastern margin as North America does on the PNW Cascadia margin (by stress continuity), although current models of the individual plates do not show this continuity. Gorda plate is strongly compressed across the Mendocino transform by the north-moving Pacific Plate. 
Development of the NW-trending Blanco transform has created a fault that avoids this strong compression. 5. Power law olivine crystal size distributions in lithospheric mantle xenoliths Science.gov (United States) Armienti, P.; Tarquini, S. 2002-12-01 Olivine crystal size distributions (CSDs) have been measured in three suites of spinel- and garnet-bearing harzburgites and lherzolites found as xenoliths in alkaline basalts from Canary Islands, Africa; Victoria Land, Antarctica; and Pali Aike, South America. The xenoliths derive from lithospheric mantle, from depths ranging from 80 to 20 km. Their textures vary from coarse to porphyroclastic and mosaic-porphyroclastic up to cataclastic. Data have been collected by processing digital images acquired optically from standard petrographic thin sections. The acquisition method is based on a high-resolution colour scanner that allows image capturing of a whole thin section. Image processing was performed using the VISILOG 5.2 package, resolving crystals larger than about 150 μm and applying stereological corrections based on the Schwartz-Saltykov algorithm. Taking account of truncation effects due to resolution limits and thin section size, all samples show scale invariance of crystal size distributions over almost three orders of magnitude (0.2-25 mm). Power law relations show fractal dimensions varying between 2.4 and 3.8, a range of values observed for distributions of fragment sizes in a variety of other geological contexts. A fragmentation model can reproduce the fractal dimensions around 2.6, which correspond to well-equilibrated granoblastic textures. Fractal dimensions >3 are typical of porphyroclastic and cataclastic samples. Slight bends in some linear arrays suggest selective tectonic crushing of crystals with size larger than 1 mm. 
The scale invariance shown by lithospheric mantle xenoliths from distant geographic regions in a variety of tectonic settings indicates that this is a common characteristic of the upper mantle and should be taken into account in rheological models and in the evaluation of metasomatic models. 6. Thinning of heterogeneous lithosphere: insights from field observations and numerical modelling Science.gov (United States) Petri, B.; Duretz, T.; Mohn, G.; Schmalholz, S. M. 2017-12-01 The nature and mechanisms of formation of extremely thinned continental crust are investigated here in field areas in the Alps (N Italy) and in the Southern Alps (N Italy), selected for their exceptional level of preservation of rift-related structures. This situation enables us to characterize (1) the pre-rift architecture of the continental lithosphere, (2) the localization of rift-related deformation in distinct portions of the lithosphere and (3) the interaction between initial heterogeneities of the lithosphere and rift-related structures. In a second stage, these observations are integrated in high-resolution, two-dimensional thermo-mechanical models taking into account various patterns of initial mechanical heterogeneities. Our results show the importance of the initial pre-rift architecture of the continental lithosphere during rifting. Key roles are given to high-angle and low-angle normal faults, anastomosing shear-zones and decoupling horizons. We propose that during the first stages of thinning, deformation is strongly controlled by the complex pre-rift architecture of the lithosphere, localized along major structures responsible for the lateral extrusion of mid to lower crustal levels. This extrusion juxtaposes mechanically stronger levels in the hyper-thinned continental crust, being exhumed by subsequent low-angle normal faults.
Altogether, these results highlight the critical role of the extraction of mechanically strong layers of the lithosphere during the extreme thinning of the continental lithosphere and allow us to propose a new model for the formation of continental passive margins. 7. Imaging Canary Island hotspot material beneath the lithosphere of Morocco and southern Spain Science.gov (United States) Miller, Meghan S.; O'Driscoll, Leland J.; Butcher, Amber J.; Thomas, Christine 2015-12-01 The westernmost Mediterranean has developed into its present day tectonic configuration as a result of complex interactions between late stage subduction of the Neo-Tethys Ocean, continental collision of Africa and Eurasia, and the Canary Island mantle plume. This study utilizes S receiver functions (SRFs) from over 360 broadband seismic stations to seismically image the lithosphere and uppermost mantle from southern Spain through Morocco and the Canary Islands. The lithospheric thickness ranges from ∼65 km beneath the Atlas Mountains and the active volcanic islands to over ∼210 km beneath the cratonic lithosphere in southern Morocco. The common conversion point (CCP) volume of the SRFs indicates that thinned lithosphere extends from beneath the Canary Islands offshore southwestern Morocco, to beneath the continental lithosphere of the Atlas Mountains, and then thickens abruptly at the West African craton. Beneath thin lithosphere between the Canary hot spot and southern Spain, including below the Atlas Mountains and the Alboran Sea, there are distinct pockets of low velocity material, as inferred from high amplitude positive, sub-lithospheric conversions in the SRFs. These regions of low seismic velocity at the base of the lithosphere extend beneath the areas of Pliocene-Quaternary magmatism, which has been linked to a Canary hotspot source via geochemical signatures.
However, we find that this volume of low velocity material is discontinuous along strike and occurs only in areas of recent volcanism and where asthenospheric mantle flow is identified with shear wave splitting analyses. We propose that the low velocity structure beneath the lithosphere is material flowing sub-horizontally northeastwards beneath Morocco from the tilted Canary Island plume, and the small, localized volcanoes are the result of small-scale upwellings from this material. 8. Lithospheric structure and relationship to seismicity beneath the Southeastern US using receiver functions Science.gov (United States) Cunningham, E.; Lekic, V. 2017-12-01 Despite being on a passive margin for millions of years, the Southeastern United States (SEUS) contains numerous seismogenic zones with the ability to produce damaging earthquakes. However, mechanisms controlling these intraplate earthquakes are poorly understood. Recently, Biryol et al. (2016) used P-wave tomography to suggest that upper mantle structures beneath the SEUS correlate with areas of seismicity and seismic quiescence. Specifically, thick and fast velocity lithosphere beneath North Carolina is stable and indicative of areas of low seismicity. In contrast, thin and slow velocity lithosphere is weak, and the transition between the strong and weak lithosphere may be correlated with seismogenic zones found in the SEUS (e.g. the Eastern Tennessee seismic zone and the Central Virginia seismic zone). Therefore, I systematically map the heterogeneity of the mantle lithosphere using converted seismic waves and quantify the spatial correlation between seismicity and lithospheric structure. The extensive network of seismometers that makes up the Earthscope USArray combined with the numerous seismic deployments in the Southeastern United States allows for an unprecedented opportunity to map changes in lithospheric structure across seismogenic zones and seismically quiescent regions.
To do so, I will use both P-to-s and S-to-p receiver functions (RFs). Since RFs are sensitive to seismic wavespeeds and density discontinuities with depth, they are particularly useful for studying lithospheric structure. Ps receiver functions contain high frequency information allowing for high resolution, but can become contaminated by large sediment signals; therefore, I remove sediment multiples and correct for time delays of later phases using the method of Yu et al. (2015), which will allow us to see later arriving phases associated with lithospheric discontinuities. S-to-p receiver functions are not contaminated by shallow layers, making them ideal to study deep lithospheric structures but they can 9. Seismic and Thermal Structure of the Arctic Lithosphere, From Waveform Tomography and Thermodynamic Modelling Science.gov (United States) Lebedev, S.; Schaeffer, A. J.; Fullea, J.; Pease, V. 2015-12-01 Thermal structure of the lithosphere is reflected in the values of seismic velocities within it. Our new tomographic models of the crust and upper mantle of the Arctic are constrained by an unprecedentedly large global waveform dataset and provide substantially improved resolution, compared to previous models. The new tomography reveals lateral variations in the temperature and thickness of the lithosphere and defines deep boundaries between tectonic blocks with different lithospheric properties and age. The shape and evolution of the geotherm beneath a tectonic unit depends on both crustal and mantle-lithosphere structure beneath it: the lithospheric thickness and its changes with time (these determine the supply of heat from the deep Earth), the crustal thickness and heat production (the supply of heat from within the crust), and the thickness and thermal conductivity of the sedimentary cover (the insulation).
Detailed thermal structure of the basins can be modelled by combining seismic velocities from tomography with data on the crustal structure and heat production, in the framework of computational petrological modelling. The most prominent lateral contrasts across the Arctic are between the cold, thick lithospheres of the cratons (in North America, Greenland and Eurasia) and the warmer, non-cratonic blocks. The lithosphere of the Canada Basin is cold and thick, similar to old oceanic lithosphere elsewhere around the world; its thermal structure offers evidence on its lithospheric age and formation mechanism. At 150-250 km depth, the central Arctic region shows a moderate low-velocity anomaly, cooler than that beneath Iceland and N Atlantic. An extension of N Atlantic low-velocity anomaly into the Arctic through the Fram Strait may indicate an influx of N Atlantic asthenosphere under the currently opening Eurasia Basin. 10. The impact of lateral variations in lithospheric thickness on glacial isostatic adjustment in West Antarctica Science.gov (United States) Nield, Grace A.; Whitehouse, Pippa L.; van der Wal, Wouter; Blank, Bas; O'Donnell, John Paul; Stuart, Graham W. 2018-04-01 Differences in predictions of Glacial Isostatic Adjustment (GIA) for Antarctica persist due to uncertainties in deglacial history and Earth rheology. The Earth models adopted in many GIA studies are defined by parameters that vary in the radial direction only and represent a global average Earth structure (referred to as 1D Earth models). Over-simplifying actual Earth structure leads to bias in model predictions in regions where Earth parameters differ significantly from the global average, such as West Antarctica. We investigate the impact of lateral variations in lithospheric thickness on GIA in Antarctica by carrying out two experiments that use different rheological approaches to define 3D Earth models that include spatial variations in lithospheric thickness. 
The first experiment defines an elastic lithosphere with spatial variations in thickness inferred from seismic studies. We compare the results from this 3D model with results derived from a 1D Earth model that has a uniform lithospheric thickness defined as the average of the 3D lithospheric thickness. Irrespective of deglacial history and sub-lithospheric mantle viscosity, we find higher gradients of present-day uplift rates (i.e. higher amplitude and shorter wavelength) in West Antarctica when using the 3D models, due to the thinner-than-1D-average lithosphere prevalent in this region. The second experiment uses seismically-inferred temperature as input to a power-law rheology thereby allowing the lithosphere to have a viscosity structure. Modelling the lithosphere with a power-law rheology results in behaviour that is equivalent to a thinner-lithosphere model, and it leads to higher amplitude and shorter wavelength deformation compared with the first experiment. We conclude that neglecting spatial variations in lithospheric thickness in GIA models will result in predictions of peak uplift and subsidence that are biased low in West Antarctica. This has important implications for ice-sheet modelling 11. Magma explains low estimates of lithospheric strength based on flexure of ocean island loads Science.gov (United States) Buck, W. Roger; Lavier, Luc L.; Choi, Eunseo 2015-04-01 One of the best ways to constrain the strength of the Earth's lithosphere is to measure the deformation caused by large, well-defined loads. The largest, simple vertical load is that of the Hawaiian volcanic island chain. An impressively detailed recent analysis of the 3D response to that load by Zhong and Watts (2013) considers the depth range of seismicity below Hawaii and the seismically determined geometry of lithospheric deflection. 
These authors find that the friction coefficient for the lithosphere must be in the normal range measured for rocks, but conclude that the ductile flow strength has to be far weaker than laboratory measurements suggest. Specifically, Zhong and Watts (2013) find that stress differences in the mantle lithosphere below the island chain are less than about 200 MPa. Standard rheologic models suggest that for the ~50 km thick lithosphere inferred to exist below Hawaii, yielding will occur at stress differences of about 1 GPa. Here we suggest that magmatic accommodation of flexural extension may explain Hawaiian lithospheric deflection even with standard mantle flow laws. Flexural stresses are extensional in the deeper part of the lithosphere below a linear island load (i.e. horizontal stresses orthogonal to the line load are lower than vertical stresses). Magma can accommodate lithospheric extension at smaller stress differences than brittle and ductile rock yielding. Dikes opening parallel to an island chain would allow easier downflexing than a continuous plate, but would not produce a freely broken plate. The extensional stress needed to open dikes at depth depends on the density contrast between magma and lithosphere, assuming magma has an open pathway to the surface. For a uniform lithospheric density ρL and magma density ρM the stress difference to allow dikes to accommodate extension is: Δσxx(z) = g z (ρM - ρL), where g is the acceleration of gravity and z is depth below the surface. For reasonable density values (i.e. 12. Global map of lithosphere thermal thickness on a 1 deg x 1 deg grid - digitally available Science.gov (United States) Artemieva, Irina 2014-05-01 This presentation reports a 1 deg × 1 deg global thermal model for the continental lithosphere (TC1). The model is digitally available from the author's web-site: www.lithosphere.info.
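The dike-opening criterion above, Δσxx(z) = g z (ρM - ρL), is easy to evaluate numerically. A minimal sketch, using illustrative density values (magma ~2700 kg/m³, mantle lithosphere ~3300 kg/m³) that are assumptions for this example and are not given in the abstract:

```python
# Stress difference at which dikes can accommodate flexural extension,
# from Delta_sigma_xx(z) = g * z * (rho_magma - rho_lithosphere).
G = 9.81  # gravitational acceleration, m/s^2

def dike_opening_stress(z_m, rho_magma=2700.0, rho_lith=3300.0):
    """Stress difference (Pa) needed to open dikes at depth z_m (metres).

    A negative value means magma-assisted extension becomes possible at
    stress differences smaller in magnitude than dry rock yielding requires.
    """
    return G * z_m * (rho_magma - rho_lith)

# Near the base of ~50 km thick lithosphere inferred below Hawaii:
stress_mpa = dike_opening_stress(50e3) / 1e6
print(f"{stress_mpa:.0f} MPa")  # prints: -294 MPa
```

With these illustrative densities the magnitude (~300 MPa) is far below the ~1 GPa ductile yield stress quoted in the abstract, which is the point of the argument: dikes relieve flexural extension before ductile yielding is reached.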
Geotherms for continental terranes of different ages (early Archean to present) are constrained by reliable data on borehole heat flow measurements (Artemieva and Mooney, 2001), checked against the original publications for data quality, and corrected for paleo-temperature effects where needed. These data are supplemented by cratonic geotherms based on xenolith data. Since heat flow measurements cover no more than half of the continents, the remaining areas (ca. 60% of the continents) are filled by statistical estimates derived from the thermal model constrained by borehole data. Continental geotherms are statistically analyzed as a function of age and are used to estimate lithospheric temperatures in continental regions with no or low-quality heat flow data. This analysis requires knowledge of lithosphere age globally. A compilation of tectono-thermal ages of lithospheric terranes on a 1 deg × 1 deg grid forms the basis for the statistical analysis. It shows that, statistically, lithospheric thermal thickness z (in km) depends on tectono-thermal age t (in Ma) as: z = 0.04t + 93.6. This relationship formed the basis for a global thermal model of the continental lithosphere (TC1). Statistical analysis of continental geotherms also reveals that this relationship holds for the Archean cratons in general, but not in detail. In particular, thick (more than 250 km) lithosphere is restricted solely to young Archean terranes (3.0-2.6 Ga), while in old Archean cratons (3.6-3.0 Ga) lithospheric roots do not extend deeper than 200-220 km. The TC1 model is presented by a set of maps, which show significant thermal heterogeneity within the continental upper mantle. The strongest lateral temperature variations (as large as 800 deg C) are typical of the shallow mantle (depth less than 100 km). A map of the 13. Lithospheric discontinuities beneath the U.S.
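The age-thickness regression reported above (z = 0.04t + 93.6, z in km, t in Ma) can be applied directly. A minimal sketch; the sample ages are arbitrary illustrative inputs:

```python
def thermal_thickness_km(age_ma):
    """Statistical lithospheric thermal thickness (km) as a function of
    tectono-thermal age (Ma), from the TC1 regression z = 0.04*t + 93.6."""
    return 0.04 * age_ma + 93.6

for age in (100, 1000, 2700, 3500):
    print(f"{age:>5} Ma -> {thermal_thickness_km(age):6.1f} km")
```

Note the abstract's own caveat: for the oldest Archean cratons (3.6-3.0 Ga) the regression overestimates thickness, since observed lithospheric roots there do not exceed 200-220 km.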
Midcontinent - signatures of Proterozoic terrane accretion and failed rifting Science.gov (United States) Chen, Chen; Gilbert, Hersh; Fischer, Karen M.; Andronicos, Christopher L.; Pavlis, Gary L.; Hamburger, Michael W.; Marshak, Stephen; Larson, Timothy; Yang, Xiaotao 2018-01-01 Seismic discontinuities between the Moho and the inferred lithosphere-asthenosphere boundary (LAB) are known as mid-lithospheric discontinuities (MLDs) and have been ascribed to a variety of phenomena that are critical to understanding lithospheric growth and evolution. In this study, we used S-to-P converted waves recorded by the USArray Transportable Array and the OIINK (Ozarks-Illinois-Indiana-Kentucky) Flexible Array to investigate lithospheric structure beneath the central U.S. This region, a portion of North America's cratonic platform, provides an opportunity to explore how terrane accretion, cratonization, and subsequent rifting may have influenced lithospheric structure. The 3D common conversion point (CCP) volume produced by stacking back-projected Sp receiver functions reveals a general absence of negative converted phases at the depths of the LAB across much of the central U.S. This observation suggests a gradual velocity decrease between the lithosphere and asthenosphere. Within the lithosphere, the CCP stacks display negative arrivals at depths between 65 km and 125 km. We interpret these as MLDs resulting from the top of a layer of crystallized melts (sill-like igneous intrusions) or otherwise chemically modified lithosphere that is enriched in water and/or hydrous minerals. Chemical modification in this manner would cause a weak layer in the lithosphere that marks the MLDs. The depth and amplitude of negative MLD phases vary significantly both within and between the physiographic provinces of the midcontinent. Double, or overlapping, MLDs can be seen along Precambrian terrane boundaries and appear to result from stacked or imbricated lithospheric blocks. 
A prominent negative Sp phase can be clearly identified at 80 km depth within the Reelfoot Rift. This arrival aligns with the top of a zone of low shear-wave velocities, which suggests that it marks an unusually shallow seismic LAB for the midcontinent. This boundary would correspond to the top of a 14. Lateral heterogeneity and vertical stratification of cratonic lithospheric keels: examples from Europe, Siberia, and North America DEFF Research Database (Denmark) Artemieva, Irina; Cherepanova, Yulia; Herceg, Matija of the Precambrian lithosphere based on surface heat flow data, (ii) non-thermal part of upper mantle seismic velocity heterogeneity based on a joint analysis of thermal and seismic tomography data, and (iii) lithosphere density heterogeneity as constrained by free-board and satellite gravity data. The latter...... of the Gondwanaland does not presently exceed 250 km depth. An analysis of temperature-corrected seismic velocity structure indicates strong vertical and lateral heterogeneity of the cratonic lithospheric mantle, with a pronounced stratification in many Precambrian terranes; the latter is supported by xenolith data... 15. Lateral heterogeneity and vertical stratification of cratonic lithospheric keels: a case study of the Siberian craton DEFF Research Database (Denmark) Artemieva, Irina; Cherepanova, Yulia; Herceg, Matija 2014-01-01 by regional xenolith P-T arrays, lithosphere density heterogeneity as constrained by free-board and satellite gravity data, and the non-thermal part of upper mantle seismic velocity heterogeneity based on joint analysis of thermal and seismic tomography data. Density structure of the cratonic lithosphere...... and strongly depleted lithospheric mantle of the Archean nuclei, particularly below the Anabar shield. Since we cannot identify the depth distribution of density anomalies, we complement the approach by seismic data. An analysis of temperature-corrected seismic velocity structure indicates strong vertical... 16.
Proceedings. First assembly of the Latin-American and Caribbean Seismological Commission - LACSC OpenAIRE Third Latin-American Congress of Seismology 2014-01-01 The Latin-American and Caribbean region is an area with a very complex tectonic setting, where stress and strain generated by the interaction of several lithospheric plates is being absorbed. Several regional fault systems, with moderate and high activity, represent a hazard for a significant part of the population (more than 500 million inhabitants). Given the recent developments in the mining and energy industries, a great deal of exploration has been focused on this part of the world, and... 17. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches Science.gov (United States) Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes 2017-04-01 In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, more regularly InSAR-derived surface displacements and seismological waveforms are combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces.
In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal of improving crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite 18. COST Action ES1401 TIDES: a European network on TIme DEpendent Seismology Science.gov (United States) Morelli, Andrea 2016-04-01 Using the full-length records of seismic events and background ambient noise, today seismology is going beyond still-life snapshots of the interior of the Earth, and looking into time-dependent changes of its properties. Data availability has grown dramatically with the expansion of seismographic networks and data centers, so as to enable much more detailed and accurate analyses. COST Action ES1401 TIDES (TIme DEpendent Seismology; http://tides-cost.eu) aims at structuring the EU seismological community to enable development of data-intensive, time-dependent techniques for monitoring Earth active processes (e.g., earthquakes, volcanic eruptions, landslides, glacial earthquakes) as well as oil/gas reservoirs.
The main structure of TIDES is organised around working groups on: Workflow integration of data and computing resources; Seismic interferometry and ambient noise; Forward problems and High-performance computing applications; Seismic tomography, full waveform inversion and uncertainties; Applications in the natural environment and industry. TIDES is an open network of European laboratories with complementary skills, and is organising a series of events - workshops and advanced training schools - as well as supporting short-duration scientific stays. The first advanced training school, held in Bertinoro (Italy) in June 2015 with about 100 participants from 20 European countries, was devoted to managing and modelling seismic data with modern tools. The next school, devoted to ambient noise, will be held in 2016 in Portugal; the program will be announced at the time of this conference. TIDES will strengthen Europe's role in a critical field for natural hazards and natural resource management. 19. How citizen seismology is transforming rapid public earthquake information and interactions between seismologists and society Science.gov (United States) Bossu, Rémy; Steed, Robert; Mazet-Roux, Gilles; Roussel, Fréderic; Caroline, Etivant 2015-04-01 Historical earthquakes are only known to us through written recollections, and so seismologists have a long experience of interpreting the reports of eyewitnesses, which probably explains why seismology has been a pioneer in crowdsourcing and citizen science. Today, the Internet is transforming this situation; it can be considered a digital nervous system comprising digital veins and intertwined sensors that capture the pulse of our planet in near real-time. How can both seismology and the public benefit from this new monitoring system?
This paper will present the strategy implemented at the Euro-Mediterranean Seismological Centre (EMSC) to leverage this new nervous system to detect and diagnose the impact of earthquakes within minutes rather than hours, and how it has transformed information systems and interactions with the public. We will show how social network monitoring and flashcrowds (massive website traffic increases on the EMSC website) are used to automatically detect felt earthquakes before seismic detections, how damaged areas can be mapped through concomitant loss of Internet sessions (visitors being disconnected), and the benefit of collecting felt reports and geolocated pictures to further constrain rapid impact assessment of global earthquakes. We will also describe how public expectations within tens of seconds of ground shaking form the basis of improved, diversified information tools which integrate this user-generated content. Special attention will be given to LastQuake, the most complex and sophisticated Twitter QuakeBot, smartphone application and browser add-on, which deals with the only earthquakes that matter for the public: the felt and damaging earthquakes. In conclusion we will demonstrate that eyewitnesses are today real-time earthquake sensors and active actors of rapid earthquake information. 20. Developing Federated Services within Seismology: IRIS' involvement in the CoopEUS Project Science.gov (United States) Ahern, T. K.; Trabant, C. M.; Stults, M. 2014-12-01 As a founding member of the CoopEUS initiative, IRIS Data Services has partnered with five data centers in Europe and UC Berkeley (NCEDC) in the US to implement internationally standardized web services to access seismological data using identical methodologies. The International Federation of Digital Seismograph Networks (FDSN) holds commission status within IASPEI/IUGG and as such is the international body that governs data exchange formats and access protocols within seismology.
The CoopEUS project involves IRIS and UNAVCO as part of the EarthScope project, and the European collaborators are all members of the European Plate Observing System (EPOS). CoopEUS includes one work package that attempts to coordinate data access between EarthScope and EPOS facilities. IRIS has worked with its partners in the FDSN to develop and adopt three key international service standards within seismology. These include 1) fdsn-dataselect, a service that returns time series data in a variety of standard formats, 2) fdsn-station, a service that returns related metadata about a seismic station in StationXML format, and 3) fdsn-event, a service that returns information about earthquakes and other seismic events in QuakeML format. Currently the 5 European data centers supporting these services include the ORFEUS Data Centre in the Netherlands, the GFZ German Research Centre for Geosciences in Potsdam, Germany, ETH Zurich in Switzerland, INGV in Rome, Italy, and the RESIF Data Centre in Grenoble, France. Presently these seven centres can all be accessed using standardized web services with identical service calls, returning results in standardized ways. IRIS is developing an IRIS federator that will allow a client to seamlessly access information across the federated centers. Details and current status of the IRIS Federator will be presented. 1. QuakeML: XML for Seismological Data Exchange and Resource Metadata Description Science.gov (United States) Euchner, F.; Schorlemmer, D.; Becker, J.; Heinloo, A.; Kästli, P.; Saul, J.; Weber, B.; QuakeML Working Group 2007-12-01 QuakeML is an XML-based data exchange format for seismology that is under development. Current collaborators are from ETH, GFZ, USC, USGS, IRIS DMC, EMSC, ORFEUS, and ISTI. QuakeML development was motivated by the lack of a widely accepted and well-documented data format that is applicable to a broad range of fields in seismology.
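The three FDSN service standards listed above (fdsn-dataselect, fdsn-station, fdsn-event) share a common query-URL scheme, which is what makes identical service calls against different federated centres possible. A minimal sketch that builds such URLs; the base URLs and parameter choices here are illustrative assumptions, not endpoints named in the text:

```python
from urllib.parse import urlencode

def fdsn_query_url(base, service, **params):
    """Build an FDSN web-service query URL of the form
    <base>/fdsnws/<service>/1/query?<params>.

    service: 'dataselect', 'station', or 'event', corresponding to the
    fdsn-dataselect, fdsn-station, and fdsn-event standards.
    """
    if service not in ("dataselect", "station", "event"):
        raise ValueError(f"unknown FDSN service: {service}")
    # Sort parameters so the same request always yields the same URL.
    return f"{base}/fdsnws/{service}/1/query?{urlencode(sorted(params.items()))}"

# The same call works unchanged against two hypothetical federated centres:
for base in ("http://service.example-dc-a.org", "http://service.example-dc-b.eu"):
    print(fdsn_query_url(base, "event", minmagnitude=6.5, starttime="2014-01-01"))
```

This URL-scheme uniformity is the property a federator relies on: a client can fan the identical query out to every centre and merge the standardized responses.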
The development team brings together expertise from communities dealing with analysis and creation of earthquake catalogs, distribution of seismic bulletins, and real-time processing of seismic data. Efforts to merge QuakeML with existing XML dialects are under way. The first release of QuakeML will cover a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Further extensions are in progress or planned, e.g., for macroseismic information, location probability density functions, slip distributions, and ground motion information. The QuakeML language definition is supplemented by a concept to provide resource metadata and facilitate metadata exchange between distributed data providers. For that purpose, we introduce unique, location-independent identifiers of seismological resources. As an application of QuakeML, ETH Zurich currently develops a Python-based seismicity analysis toolkit as a contribution to CSEP (Collaboratory for the Study of Earthquake Predictability). We follow a collaborative and transparent development approach along the lines of the procedures of the World Wide Web Consortium (W3C). QuakeML currently is in working draft status. The standard description will be subjected to a public Request for Comments (RFC) process and eventually reach the status of a recommendation. QuakeML can be found at http://www.quakeml.org. 2. QuakeML: status of the XML-based seismological data exchange format Directory of Open Access Journals (Sweden) Joachim Saul 2011-04-01 Full Text Available QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as setting a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. 
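The event description outlined above (picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, moment tensors) maps naturally onto nested XML. A minimal, schematic sketch of such a record; the element names are simplified for illustration and are not the exact QuakeML schema:

```python
import xml.etree.ElementTree as ET

# Schematic, QuakeML-inspired event record (simplified element names).
doc = """\
<eventParameters>
  <event publicID="smi:example.org/event/001">
    <origin>
      <latitude>47.37</latitude>
      <longitude>8.55</longitude>
      <depth>8000</depth>
      <time>2007-01-01T00:00:00Z</time>
    </origin>
    <magnitude>
      <mag>4.2</mag>
      <type>ML</type>
    </magnitude>
  </event>
</eventParameters>
"""

root = ET.fromstring(doc)
for ev in root.findall("event"):
    mag = ev.find("magnitude/mag").text
    lat = ev.find("origin/latitude").text
    print(f"event {ev.get('publicID')}: M{mag} at lat {lat}")
```

The `publicID` attribute echoes the unique, location-independent resource identifiers mentioned in the abstract; any QuakeML-aware consumer would parse real catalogs along these lines, just against the full schema.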
The current release (version 1.2) is based on a public Request for Comments process and accounts for suggestions and comments provided by a broad international user community. QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, ground motion, seismic inventory, and resource metadata) has been started, but is at an early stage. Several applications based on the QuakeML data model have been created so far. Among these are earthquake catalog web services at the European-Mediterranean Seismological Centre (EMSC), GNS Science, and the Southern California Earthquake Data Center (SCEDC), and QuakePy, an open-source Python-based seismicity analysis toolkit. Furthermore, QuakeML is being used in the SeisComP3 system from GFZ Potsdam, and in the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center installations, developed by the Southern California Earthquake Center (SCEC). QuakeML is still under active and dynamic development. Further contributions from the community are crucial to its success and are highly welcome. 3. 20 year IRIS: impact on seismological research at home and abroad Science.gov (United States) van der Hilst, R. D. 2004-12-01 The positive impact of IRIS, through its programs (GSN, PASSCAL, DMS, EO) and its workshops, on seismological research and community building can hardly be overestimated. The Data Management System has been very successful in bringing data to users for research and education anywhere in the world; it enables routine, and in many cases real-time, analysis of massive amounts of waveform data for a spectacularly diverse range of studies. (I will give examples of surface wave tomography and inverse scattering studies of the core-mantle boundary.)
The support that PASSCAL provides for the planning and execution of field campaigns allows seismologists to shift attention from operational issues to exciting science, and the required data dissemination through DMS not only results in tremendously valuable data sets but also contributes to community building through (international) collaboration. Europe, Australia, and Asia also have rich histories of network and portable array seismometry, and in many areas the cumulative station density exceeds that of North America (even, perhaps, with USArray). Moreover, in some cases, such as the use of temporary, roving arrays of broadband seismometers, activities overseas may have preceded and inspired developments in the US. However, the absence of effective central systems for management and dissemination of quality-controlled data has left many unique historical and regional data sets underutilized. This situation is changing, however. As an example I will mention the NERIES initiative to build a better infrastructure for seismological research and education in Europe. Apart from providing an example, through international collaboration IRIS can continue to play an important role in the improvement of the global seismological infrastructure. 4. A short history of Japanese historical seismology: past and the present Science.gov (United States) Matsu'ura, Ritsuko S. 2017-12-01 Since seismicity in Japan is fairly high, Japanese interest in historical seismicity can be traced back to the ninth century, only a few centuries after the formation of the ancient ruling state. A thousand years later, 2 years before the modern seismological society was founded, research on historical seismology started in Japan in 1878. Thanks to the accumulated work of the last 140 years, present-day Japanese seismologists can read many historical materials without reading cursive scripts.
We have convenient access to the historical information related to earthquakes, in modern characters, across 27,759 pages. We now have 214 epicenters of historical earthquakes from 599 AD to 1872. Among them, 134 events in the early modern period were assigned hypocentral depths and proper magnitudes. Intensity data at 8700 places for those events were estimated. These precise intensity data enabled us to compare the detailed source areas of pairs of repeated historical earthquakes, such as the 1703 Genroku earthquake with the 1923 Kanto earthquake, and the 1707 Hoei earthquake with the combination of the 1854 Ansei Tokai and Ansei Nankai earthquakes. It is revealed that the focal area of the former, larger event does not completely include those of the latter, smaller earthquakes, although these were believed to be typical sets of characteristic interplate earthquakes at the Sagami trough and at the Nankai trough. Research on historical earthquakes is very important to assess seismic hazard in the future. We still have one-fifth of the events of the early modern period to analyze in detail. The compilation of places that experienced high intensities in the modern events is also necessary. For the ancient and medieval periods, many equivocal events remain. Further advances in interdisciplinary research on historical seismology are necessary. 5. Seismological and Geodynamic Monitoring Network in the "Javakheti" Test Zone in the Southern Caucasus Science.gov (United States) Arakelyan, A.; Babayan, H.; Karakhanyan, A.; Durgaryan, R.; Basilaia, G.; Sokhadze, G.; Bidzinashvili, G. 2012-12-01 The Javakheti Highland located in the border region between Armenia and Georgia (sharing a border with Turkey) is an area in the Southern Caucasus of young Holocene-Quaternary volcanism and a region with convergence of a number of active faults.
Issues related to the geometry, kinematics and slip-rate of these faults and assessment of their seismic hazard remain unclear, in part due to the fragmentary nature of the studies, carried out solely within the borders of each of the countries rather than region-wide. In the frame of the ISTC A-1418 Project "Open network of scientific Centers for mitigation risk of natural hazards in the Southern Caucasus and Central Asia" the Javakheti Highland was selected as a trans-border test zone. This designation allowed for the expansion and upgrading of the seismological and geodynamic monitoring networks under the auspices of several international projects (ISTC CSP-053 Project "Development of Communication System for seismic hazard situations in the Southern Caucasus and Central Asia", NATO SfP-983284 Project "Caucasus Seismic Emergency Response") as well as through joint research programs with the National Taiwan University and Institute of Earth Sciences (IES, Taiwan), Universite Montpellier II (France) and Ecole et Observatoire des Sciences de la Terre-Université de Strasbourg (France). Studies of geodynamic processes and seismicity of the region and their interaction have been carried out utilizing the newly established seismological and geodynamic monitoring networks and have served as a basis for the study of the geologic and tectonic structure. Upgrading and expansion of the seismological and geodynamic networks required urgent solutions to the following tasks: introduction of efficient online systems for information acquisition, accumulation and transmission (including satellite systems) from permanent and temporarily installed stations; adoption of international standards for organization and management of databases in GIS 6.
Prediction of the area affected by earthquake-induced landsliding based on seismological parameters Science.gov (United States) Marc, Odin; Meunier, Patrick; Hovius, Niels 2017-07-01 We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. 
Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines. 7. Recent advance in polar seismology: Global impact of the International Polar Year Science.gov (United States) Kanao, Masaki; Zhao, Dapeng; Wiens, Douglas A.; Stutzmann, Éléonore 2015-03-01 The most exciting initiative for recent polar studies was the International Polar Year (IPY) in 2007-2008. The IPY has witnessed a growing community of seismologists who have made considerable efforts to acquire high-quality data in polar regions. It also provided an excellent opportunity to make significant advances in seismic instrumentation of the polar regions to achieve scientific targets involving global issues. Taking these aspects into account, we organize and publish a special issue in Polar Science on the recent advances in polar seismology and cryoseismology as fruitful achievements of the IPY. 8. European seismological data exchange, access and processing: current status of the Research Infrastructure project NERIES Science.gov (United States) Giardini, D.; van Eck, T.; Bossu, R.; Wiemer, S. 2009-04-01 The EC Research Infrastructure project NERIES, an Integrated Infrastructure Initiative in seismology for 2006-2010, has passed its mid-term point. We will present a short, concise overview of the current state of the project, established cooperation with other European and global projects, and the planning for the last year of the project. Earthquake data archiving and access within Europe has dramatically improved during the last two years. This concerns earthquake parameters, digital broadband and acceleration waveforms, and historical data. The Virtual European Broadband Seismic Network (VEBSN) currently consists of more than 300 stations.
A new distributed data archive concept, the European Integrated Waveform Data Archive (EIDA), has been implemented in Europe, connecting the larger European seismological waveform data archives. Global standards for earthquake parameter data (QuakeML) and tomography models have been developed and are being established. Web application technology has been and is being developed to jump-start the next generation of data services. A NERIES data portal provides a number of services testing the potential capacities of new open-source web technologies. Data application tools such as shakemaps, lossmaps, site response estimation, and tools for data processing and visualisation are currently available, although some of these tools are still in an alpha version. A European tomography reference model will be discussed at a special workshop in June 2009. Shakemaps, coherent with the NEIC application, are implemented in Turkey, Italy, Romania, Switzerland and several other countries. The comprehensive site response software is being distributed and used both inside and outside the project. NERIES organises several workshops, inviting both consortium and non-consortium participants and covering a wide range of subjects: 'Seismological observatory operation tools', 'Tomography', 'Ocean bottom observatories', 'Site response software training' 9. Effects of Seismological and Soil Parameters on Earthquake Energy demand in Level Ground Sand Deposits Science.gov (United States) nabili, sara; shahbazi majd, nafiseh 2013-04-01 Liquefaction has been a source of major damage during severe earthquakes. To evaluate this phenomenon there are several stress-, strain- and energy-based approaches. The energy method has received particular attention from researchers due to its advantages with respect to the other approaches. The use of the energy concept to define liquefaction potential is validated through laboratory element and centrifuge tests as well as field studies.
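In the energy-based approach described in this entry, the dissipated energy per unit volume is the accumulated area enclosed by the stress-strain hysteresis loops. A minimal sketch of that bookkeeping for one closed loop, using the shoelace formula; this is an illustration of the concept, not the authors' code:

```python
def dissipated_energy(stress, strain):
    """Energy dissipated per unit volume over one closed stress-strain loop,
    i.e. the area enclosed by the loop, via the shoelace formula.
    The loop is assumed closed (last point connects back to the first).
    """
    n = len(stress)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        # Signed area contribution of segment i -> j in (strain, stress) space
        area += strain[i] * stress[j] - strain[j] * stress[i]
    return abs(area) / 2.0
```

Summing this quantity over all loops of a cyclic loading history gives the accumulated (demand) energy that the abstract correlates with pore pressure buildup.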
This approach is based on the hypothesis that pore pressure buildup is directly related to the dissipated energy in sands, which is the accumulated area between the stress-strain loops. Numerous investigations were performed to find a relationship which correlates the dissipated energy to the soil parameters, but there are not sufficient studies relating this dissipated energy, known as the demand energy, simultaneously to seismological and soil parameters. The aim of this paper is to investigate the dependence of the demand energy in sands on seismological and soil parameters. To perform this task, an effective stress analysis was executed using the FLAC finite difference program. The Finn model, a built-in constitutive model in FLAC, was utilized. Since an important stage in predicting liquefaction is the prediction of excess pore water pressure at a given point, a simple numerical framework is presented to assess its generation during cyclic loading in a given centrifuge test. According to the results, predicted excess pore water pressures did not closely match the measured values in the centrifuge test, but they can be used in the numerical assessment of excess pore water pressure with an acceptable degree of precision. Subsequently, the centrifuge model was reanalyzed using several real earthquake acceleration records with different seismological parameters such as earthquake magnitude and hypocentral distance. The accumulated energies (demand energy) dissipated in 10. Seismological database for Banat seismic region (Romania) - Part 1: The parametric earthquake catalogue International Nuclear Information System (INIS) Oros, E.; Popa, M.; Moldovan, I. A. 2008-01-01 The most comprehensive seismological database for the Banat seismic region (Romania) has been achieved.
This paper refers to the essential characteristics of the first component of this database, namely the Parametric Earthquakes Catalogue for the Banat Seismic Region (PECBSR). PECBSR comprises 7783 crustal earthquakes (3 ≤ h ≤ 25 km) with 0.4 ≤ Mi ≤ 5.6 (Mi is ML, MD, MS, MW, Mm and/or mb from the compiled sources) that occurred in the Banat region and its surroundings between the years 1443 and 2006. Different magnitude scales were converted into the moment magnitude scale, Mw. The completeness of PECBSR strongly depends on time. (authors) 11. Evaluation results after seven years of operation for the permanent Hellenic Seismological Network of Crete (HSNC). Science.gov (United States) Vallianatos, F.; Hloupis, G.; Papadopoulos, I. 2012-04-01 The Hellenic arc and the adjacent areas of the Greek mainland are the most active in western Eurasia and some of the most seismically active zones of the world. The seismicity of the South Aegean is extremely high and is characterised by the frequent occurrence of large shallow and intermediate-depth earthquakes. Until 2004, the seismological stations installed by several providers (NOA, GEOFON, MEDNET) provided an average interstation distance of around 130 km, resulting in catalogues with a minimum magnitude of completeness (Mc) of 3.7. To provide dense, state-of-the-art instrumental coverage of seismicity in the South Aegean, the HSNC began operation in 2004. Today it consists of twelve (12) permanent seismological stations equipped with short-period and broadband seismographs coupled with 3rd-generation 24-bit data loggers, as well as two (2) accelerographs. The addition of the HSNC, along with the combined use of all the active networks in the South Aegean area (NOA, GEOFON, AUTH), decreases the average interstation distance to 60 km and provides catalogues complete down to Mc = 3.2.
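Catalogue homogenization of the kind used for PECBSR above (converting ML, MD, MS, etc. into Mw) is typically done with linear regressions between magnitude scales. A schematic sketch follows; the coefficients below are purely illustrative placeholders, not the PECBSR regressions, which the abstract does not give:

```python
# Hypothetical linear relations Mw = a * M + b, one per input scale.
# These (a, b) values are illustrative only; a real catalogue would use
# regressions calibrated against events with known Mw.
CONVERSIONS = {
    "Mw": (1.00, 0.00),  # already moment magnitude
    "ML": (0.85, 0.65),
    "MS": (0.67, 2.07),
    "mb": (0.85, 1.03),
}

def to_mw(magnitude, scale):
    """Convert a magnitude on a given scale to moment magnitude Mw."""
    a, b = CONVERSIONS[scale]
    return a * magnitude + b
```

Applying such a mapping to every event yields a catalogue in a single, uniform magnitude scale, which is a prerequisite for completeness and recurrence analyses.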
Data transmission and telemetry are implemented by a hybrid network consisting of dedicated wired ADSL links as well as VSAT links using a unique private satellite hub. Real-time data are distributed to collaborating networks (AUTH) and laboratories (Department of Earth Sciences - UCL), while at the same time events are appended automatically and manually to the EMSC database. Additional value to the network is provided by means of prototype systems deployed in situ for the purposes of: a) acquiring aftershock data in the minimum time after a main event, using a mobile seismological network called RaDeSeis (Rapid Deployment Seismological network), which consists of a central station, acting also as the central communication hub, and WiFi-coupled mobile stations; b) the development of dedicated hardware and software solutions for rapid installation times (around 1 hour for each station) leading to 12. 25 Years of Research in Earth Physics and One Century of Seismology in Romania International Nuclear Information System (INIS) Marmureanu, Gh. 2002-01-01 The conference '25 Years of Research in Earth Physics and One Century of Seismology in Romania' held in Bucharest, Romania, on September 27-29, 2002, was structured as follows: 1. Keynote lectures (4 papers); Section 1 - Exchange of data and improvement of earthquake monitoring during the last 25 years (6 papers); Section 2 - Study of the seismic source (5 papers); Section 3 - Seismotectonics and geodynamics of the Carpatho-Balkan area (16 papers); Section 4 - Seismic hazard assessment (14 papers); Section 5 - Earthquake prediction research (7 papers); Section 6 - Lessons from earthquake damage and policies for seismic risk mitigation (3 papers) 13.
Silicate melt metasomatism in the lithospheric mantle beneath SW Poland Science.gov (United States) Puziewicz, Jacek; Matusiak-Małek, Magdalena; Ntaflos, Theodoros; Grégoire, Michel; Kukuła, Anna 2014-05-01 The xenoliths of peridotites representing the subcontinental lithospheric mantle (SCLM) beneath SW Poland and adjacent parts of Germany occur in the Cenozoic alkaline volcanic rocks. Our study is based on detailed characterization of xenoliths occurring in 7 locations (Steinberg in Upper Lusatia, Księginki, Pilchowice, Krzeniów, Wilcza Góra, Winna Góra and Lutynia in Lower Silesia). One of the two major lithologies occurring in the xenoliths, which we call the "B" lithology, comprises peridotites (typically harzburgites) with olivine containing from 90.5 to 84.0 mole % of forsterite. The harzburgites contain no clinopyroxene or are poor in that mineral (e.g. in Krzeniów the group "B" harzburgites contain pfu in ortho-, and pfu in clinopyroxene). The exceptions are xenoliths from Księginki, which contain pyroxenes characterised by a negative correlation between mg# and Al. The REE patterns of both ortho- and clinopyroxene in the group "B" peridotites suggest equilibration with silicate melt. The rocks of the "B" lithology were formed due to alkaline silicate melt percolation in the depleted peridotitic protolith. The basaltic melts formed at high pressure are usually undersaturated in both ortho- and clinopyroxene at lower pressures (Kelemen et al. 1992). Because of cooling and dissolution of ortho- and clinopyroxene the melts change their composition and become saturated in one or both of those phases. Experimental results (e.g. Tursack & Liang 2012 and references therein) show that the same applies to alkaline basaltic silicate melts and that their reactive percolation in the peridotitic host leads to a decrease of the Mg/(Mg+Fe) ratios of olivine and pyroxenes.
Thus, the variation of relative volumes of olivine and orthopyroxene as well as the decrease of mg# of rock-forming silicates is well explained by reactive melt percolation in the peridotitic protolith consisting of high mg# olivine and pyroxenes (in the area studied by us that protolith was characterised by olivine 14. High-Resolution Gravity Field Modeling for Mercury to Estimate Crust and Lithospheric Properties Science.gov (United States) Goossens, S.; Mazarico, E.; Genova, A.; James, P. B. 2018-05-01 We estimate a gravity field model for Mercury using line-of-sight data to improve the gravity field model at short wavelengths. This can be used to infer crustal density and the support mechanism of the lithosphere. 15. A seismic tomography study of lithospheric structure under the Norwegian Caledonides DEFF Research Database (Denmark) Hejrani, Babak; Jacobsen, B. H.; Balling, N. 2012-01-01 A deep lithospheric transition between southern Norway and southern Sweden has been revealed in papers by Medhus et al. (2009) and Medhus (2010). This lithospheric transition crosses various tectonic units, including the Caledonides. We address the question of whether this transition continu...... (Hejrani et al., 2011) (optimizes 2D ray coverage under a crooked profile) is used to resolve the details of the transition boundaries in lithosphere structure across the mountains and its relation to the geological surface settings....... in this area. These results are compared with the upper mantle structure obtained by Medhus (2010) and Hejrani et al. (2011) for Caledonian and shield units to the south in southern Norway and Sweden, where the lithospheric transition follows the eastern margin of the Oslo Graben. Crooked line seismic tomography... 16. Global map of lithosphere thermal thickness on a 1 deg x 1 deg grid - digitally available DEFF Research Database (Denmark) Artemieva, Irina 2014-01-01
This analysis requires knowledge oflithosphere age globally.A compilation of tectono-thermal ages of lithospheric terranes on a 1 deg 1 deg grid forms the basis forthe statistical analysis. It shows that, statistically, lithospheric thermal thickness z (in km) depends......This presentation reports a 1 deg 1 deg global thermal model for the continental lithosphere (TC1). The modelis digitally available from the author’s web-site: www.lithosphere.info.Geotherms for continental terranes of different ages (early Archean to present) are constrained by reliabledata...... on borehole heat flow measurements (Artemieva and Mooney, 2001), checked with the original publicationsfor data quality, and corrected for paleo-temperature effects where needed. These data are supplemented bycratonic geotherms based on xenolith data.Since heat flow measurements cover not more than half... 17. Characterizing Lithospheric Thickness in Australia using Ps and Sp Scattered Waves Science.gov (United States) Ford, H. A.; Fischer, K. M.; Rychert, C. A. 2008-12-01 The purpose of this study is to constrain the morphology of the lithosphere-asthenosphere boundary throughout Australia using scattered waves. Prior surface wave studies have shown a correlation between lithospheric thickness and the three primary geologic provinces of Australia, with the shallowest lithosphere located beneath the Phanerozoic province to the east, and the thicker lithosphere located beneath the Proterozoic and Archean regions. To determine lithospheric thickness, waveform data from twenty permanent broadband stations spanning mainland Australia and the island of Tasmania were analyzed using Ps and Sp migration techniques. Waveform selection for each station was based on epicentral distance (35° to 80° for Ps and 55° to 80° for Sp), and event depth (no greater than 300 km for Sp). 
For both Ps and Sp a simultaneous deconvolution was performed on the data for each of the twenty stations, and the resulting receiver function for each station was migrated to depth. Data were binned by epicentral distance to differentiate direct discontinuity phases from crustal reverberations (for Ps) and other teleseismic arrivals (for Sp). Early results in both Ps and Sp show a clear Moho discontinuity at most stations in addition to sharp, strong crustal reverberations seen in many of the Ps images. In the eastern Phanerozoic province, a strong negative phase at 100-105 km is evident in Ps for stations CAN and EIDS. The negative phase lies within a depth range that corresponds to the negative velocity gradient between fast lithosphere and slow asthenosphere imaged by surface waves. We therefore think that it is the lithosphere-asthenosphere boundary. On the island of Tasmania, a negative phase at 70-75 km in Ps images at stations TAU and MOO also appears to be the lithosphere-asthenosphere boundary. In the Proterozoic and Archean regions of the Australian continent, initial results for both Ps and Sp migration indicate clear crustal phases, but significantly 18. Global maps of the magnetic thickness and magnetization of the Earth’s lithosphere OpenAIRE Foteini Vervelidou; Erwan Thébault 2015-01-01 We have constructed global maps of the large-scale magnetic thickness and magnetization of Earth’s lithosphere. Deriving such large-scale maps based on lithospheric magnetic field measurements faces the challenge of the masking effect of the core field. In this study, the maps were obtained through analyses in the spectral domain by means of a new regional spatial power spectrum based on the Revised Spherical Cap Harmonic Analysis (R-SCHA) formalism. A series of regional spectral analyses wer... 19. Deformation of the Pannonian lithosphere and related tectonic topography: a depth-to-surface analysis OpenAIRE Dombrádi, E.
2012-01-01 Fingerprints of deep-seated, lithospheric deformation are often recognised on the surface, contributing to topographic evolution, drainage organisation and mass transport. Interactions between deep and surface processes were investigated in the Carpathian-Pannonian region. The lithosphere beneath the Pannonian basin has formerly been extended, significantly stretched and heated up and thus became extremely weak from a rheological point of view. From Pliocene times onward the ‘crème brulee’ ty... 20. Amount of Asian lithospheric mantle subducted during the India/Asia collision OpenAIRE Replumaz, A.; Guillot, S.; Villaseñor, Antonio; Negredo, A. M. 2013-01-01 Body wave seismic tomography is a successful technique for mapping lithospheric material sinking into the mantle. Focusing on the India/Asia collision zone, we postulate the existence of several Asian continental slabs, based on seismic global tomography. We observe a lower mantle positive anomaly between 1100 and 900 km depths, that we interpret as the signature of a past subduction process of Asian lithosphere, based on the anomaly position relative to positive anomalies related to Indian c... 1. Migration of plutonium and americium in the lithosphere International Nuclear Information System (INIS) Fried, S.; Friedman, A.M.; Hines, J.J.; Atcher, R.W.; Quarterman, L.A.; Volesky, A. 1976-01-01 When radionuclides are stored as wastes either in permanent repositories or in waste storage areas, the possibility of escape into the environment must be considered. Surface contamination and the transport and migration of radionuclides into the lithosphere through the agency of water are discussed. Water in the form of rain will inevitably wash contaminants into soils and thence into conducting rocks. The migration of radionuclides must follow widely varying paths. 
In porous rocks, water percolates easily under a slight pressure gradient, and rapid movement of large volumes of water can result, with concomitant transport of large amounts of contaminating materials. In relatively non-porous rocks such as Niagara limestones the transport meets much more resistance and the volumes of water conducted are correspondingly reduced. In such situations much of the migration of water and its solutes may be through cracks and fissures in the rock. Certain strata of rock or rock products may be almost impervious to flow of water and by this token may be considered to be an especially suitable container for long-term safe storage of nuclear wastes, particularly if these strata are quiescent. A series of investigations was undertaken to examine the properties of rocks acting as retarding agents in the migration of radionuclides. The rocks that are discussed are Niagara limestone (chosen for its density and fine porosity), basalt from the National Reactor Test site, and Los Alamos tuff 2. The Lithospheric Structure Beneath Canary Islands from Receiver Function Analysis Science.gov (United States) Martinez-Arevalo, C.; Mancilla, F.; Helffrich, G. R.; Garcia, A. 2009-12-01 The Canary Archipelago is located a few hundred kilometers off the western Moroccan coast, extending 450 km west-to-east. It is composed of seven main islands. All but one have been active in the last million years. The origin of the Canary Islands is not well established, and local and regional geological features cannot be completely explained by the current models. The main aim of this study is to provide new data that help us to understand and constrain the archipelago's origin and tectonic evolution.
The crustal structure under each station is obtained by applying the P-receiver function technique to the teleseismic P arrivals recorded by the broadband seismic network installed in the Canary Islands by the Instituto Geográfico Nacional (IGN) and two temporary stations (MIDSEA and IRIS). We computed receiver functions using the Extended-Time Multitaper Frequency Domain Cross-Correlation Receiver Function (ET-MTRF) method. The results show that the crust is thicker, around 22 km, in the eastern islands (Fuerteventura and Lanzarote) than in the western ones (El Hierro, La Palma, Tenerife), around 17 km, with the exception of La Gomera island. This island, located in the west, exhibits a crustal structure similar to Fuerteventura and Lanzarote. A discontinuity at 70-80 km, possibly the LAB (Lithosphere-Asthenosphere Boundary), is clearly observed at all the stations. It appears that Moho depths do not track the LAB discontinuity. 3. Detachments of the subducted Indian continental lithosphere based on 3D finite-frequency tomographic images Science.gov (United States) Liang, X.; Tian, X.; Wang, M. 2017-12-01 The Indian plate collided with the Eurasian plate at ca. 60 Ma, and there has been about 3000 km of crustal shortening since the continent-continent collision. At least one third of the total amount of crustal shortening between the Indian and Eurasian plates cannot be accounted for by thickened Tibetan crust and surface erosion; it requires a combination of transfer of lower crust to the mantle by eclogitization and lateral extrusion. Based on the lithosphere-asthenosphere boundary images beneath the Tibetan plateau, there is also at least an equivalent deficit of lithospheric mantle, subducted into the upper/lower mantle or extruded laterally with the crust. We have to recover a detailed image of the Indian continental lithosphere beneath the plateau in order to explain this mass-budget deficit.
Combining the new teleseismic body waves recorded by the SANDWICH passive seismic array with waveforms from several previous temporary seismic arrays, we carried out finite-frequency tomographic inversions to image three-dimensional velocity structures beneath the southern and central Tibetan plateau to examine the possible image of subducted Indian lithosphere in the Tibetan upper mantle. We have recovered a continuous high-velocity body in the upper mantle and piecewise high-velocity anomalies in the mantle transition zone. Based on their geometry and relative locations, we interpret these high-velocity anomalies as the subducted and detached Indian lithosphere at different episodes of the plateau evolution. Detachments of the subducted Indian lithosphere should have a crucial impact on the volcanic activity and uplift history of the plateau. 4. Three-dimensional lithospheric density distribution of China and surrounding regions Directory of Open Access Journals (Sweden) Chuantao Li 2014-01-01 Full Text Available In this paper, we analyze the lithospheric density distribution of China and surrounding regions on the basis of 30′ × 30′ gravity data and 1° × 1° P-wave velocity data. Firstly, we used the empirical equation between the density and the P-wave velocity difference as the basis of the initial model of the Asian lithospheric density. Secondly, we calculated the gravity anomaly, caused by the Moho discontinuity and the sedimentary layer discontinuity, by the Parker formula. Thirdly, the gravity anomaly of spherical harmonic degrees 2–40 for the anomalous body below the lithosphere is calculated based on the EGM96 model. Finally, by using the Algebraic Reconstruction Technique (ART), the inversion of the 30′ × 30′ residual lithospheric Bouguer gravity anomaly caused by the lithosphere yields a rather detailed structural model.
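The Algebraic Reconstruction Technique mentioned above is, at its core, Kaczmarz's method: each datum's linear equation defines a hyperplane, and the model vector is projected onto each hyperplane in turn. A minimal sketch with sparse rows; this illustrates the iteration only, not the paper's actual implementation or parameterization, which the abstract does not give:

```python
def art_solve(rows, b, n_unknowns, sweeps=50, relax=1.0):
    """Minimal Kaczmarz-type ART solver for A x = b.

    rows: one entry per equation, each a list of (column, coefficient)
    pairs (the sparse row of A). Each sweep projects the current model
    onto every row's hyperplane in turn; relax is the relaxation factor.
    """
    x = [0.0] * n_unknowns
    for _ in range(sweeps):
        for row, bi in zip(rows, b):
            dot = sum(c * x[j] for j, c in row)       # a_i . x
            norm2 = sum(c * c for _, c in row)        # ||a_i||^2
            if norm2 == 0.0:
                continue
            upd = relax * (bi - dot) / norm2
            for j, c in row:
                x[j] += upd * c                       # project onto hyperplane
    return x
```

In a tomographic or gravity inversion, each row would hold the sensitivity of one observation to the model cells it samples; the same loop structure applies regardless of problem size.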
The results show that the lithospheric density distribution of China and surrounding regions has a certain connection with the tectonic structure. The density is relatively high in the Philippine Sea plate, the Japan Sea, the Indian plate, the Kazakhstan shield and the Western Siberia plain, whereas the Tibetan Plateau has low-density characteristics. The minimum value of density lies in the north of the Philippines, in Taiwan province and in the Ryukyu island arc. 5. Peeling back the lithosphere: Controlling parameters, surface expressions and the future directions in delamination modeling Science.gov (United States) Göğüş, Oğuz H.; Ueda, Kosuke 2018-06-01 Geodynamical models investigate the rheological and physical properties of the lithosphere that peels back (delaminates) from the upper-middle crust. Meanwhile, model predictions are compared with a set of observations in the geological context to test the validity of delamination. Here, we review numerical and analogue models of delamination from these perspectives and provide a number of first-order topics which future modeling studies may address. Models suggest that the presence of a weak lower crust residing between the strong mantle lithosphere (at least 100 times more viscous/stronger) and the strong upper crust is necessary to develop delamination. Lower crustal weakening may be induced by melt infiltration or shear heating, or it occurs naturally through the jelly-sandwich-type strength profile of the continental lithosphere. The negative buoyancy of the lithosphere required to facilitate the delamination is induced by the pre-existing ocean subduction and/or the lower crustal eclogitization. Surface expression of the peeling-back lithosphere has a distinct transient and migratory imprint on the crust, resulting in rapid surface uplift/subsidence, magmatism, heating and shortening/extension.
A new generation of geodynamical experiments can explain how different types of melting (e.g. hydrated or dry melting) occur with delamination. Reformation of the lithosphere after removal, three-dimensional aspects, and the termination of the process are key investigation areas for future research. The robust model predictions, as with other geodynamic modeling studies, should be reconciled with observations. 6. The rheological structure of the lithosphere in the Eastern Marmara region, Turkey Science.gov (United States) Oruç, Bülent; Sönmez, Tuba 2017-05-01 The aim of this work is to propose the geometries of the crustal-lithospheric mantle boundary (Moho) and lithosphere-asthenosphere boundary (LAB) and the 1D thermal structure of the lithosphere, in order to establish a rheological model of the Eastern Marmara region. The average depths of the Moho and LAB are respectively 35 km and 51 km, from radially averaged amplitude spectra of EGM08 Bouguer anomalies. The geometries of the Moho and LAB interfaces are estimated from the Parker-Oldenburg gravity inversion algorithm. Our results show the Moho depth varies from 31 km at the northern part of the North Anatolian Fault Zone (NAFZ) to 39 km below the mountain belt in the southern part of the NAFZ. The depth to the LAB beneath the same parts of the region ranges from 45 km to 55 km. Using the lithospheric strength and thermal boundary layer structure, we analyzed the conditions for the development of lithospheric thinning. A two-dimensional strength profile has been estimated for the rheological model of the study area. We thus suggest that the rheological structure consists of a strong upper crust, a weak lower crust, and a partly molten upper lithospheric mantle. 7. Constraints on the Chemistry and Abundance of Hydrous Phases in Sub Continental Lithospheric Mantle: Implications for Mid-Lithospheric Discontinuities Science.gov (United States) Saha, S.; Dasgupta, R.; Fischer, K. M.; Mookherjee, M.
2017-12-01 The origins of a 2-10% reduction in seismic shear wave velocity (Vs) at depths of 60-160 km in sub continental lithospheric mantle (SCLM) regions, identified as the Mid Lithospheric Discontinuity (MLD) [e.g., 1], are highly debated [e.g., 2, 3]. One of the proposed explanations for MLDs is the presence of hydrous minerals such as amphibole and phlogopite at these depths [e.g., 2, 4, 5]. Although the stability and compositions of these phases in peridotite + H2O ± CO2 have been widely explored [e.g., 6], their composition and abundance as a function of permissible SCLM chemistry remain poorly understood. We have compiled phase equilibria experiments conducted over a range of pressure (0.5-8 GPa), temperature (680-1300 °C), major element peridotite compositions, and volatiles (H2O: 0.05-13.79 wt.% and CO2: 0.25-5.3 wt.%). The goal was to constrain how compositional parameters such as CaO and alkali/H2O affect the chemistry and abundance of amphibole and phlogopite. We observe that the abundance of amphibole increases with CaO content and decreasing alkali/H2O. The abundance of phlogopite varies directly with K2O content. Unlike phlogopite compositions, which remain consistent, amphibole compositions show variability (pargasitic to K-richterite) depending on bulk CaO and Na2O. Mineral modes, obtained by mass balance on a melt/fluid-free basis, were used to calculate the aggregate shear wave velocity, Vs, for the respective assemblages [e.g., 7] and compared with absolute values observed at MLD depths [e.g., 8]. Vs shows a strong inverse correlation with phlogopite and amphibole modes (particularly where phlogopite is absent). For the Mg# range of cratonic xenoliths, 5-10% phlogopite at MLD depths can match the observed Vs values, while CaO contents in cratonic xenoliths limit the amphibole abundance to 10%, which is lower than previous estimates based on heat flow calculations [e.g., 4].
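An aggregate Vs for a mineral assemblage, as computed in this abstract from mineral modes, can be estimated by averaging the phases' shear moduli and densities. The sketch below uses a Voigt-Reuss-Hill average, one common choice (the averaging scheme actually used in the cited work [e.g., 7] may differ), with illustrative input values:

```python
import math

def aggregate_vs(phases):
    """Aggregate shear-wave velocity (km/s) of a mineral assemblage.

    phases: list of (volume_fraction, shear_modulus_GPa, density_g_cm3);
    fractions should sum to 1. Uses a Voigt-Reuss-Hill average of the
    shear moduli. With G in GPa and rho in g/cm^3, sqrt(G/rho) is km/s.
    """
    voigt = sum(f * g for f, g, _ in phases)         # arithmetic mean of G
    reuss = 1.0 / sum(f / g for f, g, _ in phases)   # harmonic mean of G
    g_vrh = 0.5 * (voigt + reuss)
    rho = sum(f * r for f, _, r in phases)           # mean density
    return math.sqrt(g_vrh / rho)
```

Because hydrous phases such as phlogopite have much lower shear moduli than olivine, raising their mode in such a mixture lowers the aggregate Vs, which is the inverse correlation the abstract reports.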
The modes of hydrous and other phases and corresponding Vs values could be used to 8. Seismological and geological investigation for earthquake hazard in the Greater Accra Metropolitan Area International Nuclear Information System (INIS) Doku, M. S. 2013-07-01 A seismological and geological investigation for earthquake hazard in the Greater Accra Metropolitan Area was undertaken. The research was aimed at employing a mathematical model to estimate the seismic stress for the study area by generating a complete, unified and harmonized earthquake catalogue spanning 1615 to 2012. Seismic events were sourced from Leydecker, G. and P. Amponsah (1986), Ambraseys and Adams (1986), Amponsah (2008), the Geological Survey Department, Accra, Ghana, Amponsah (2002), the National Earthquake Information Service, United States Geological Survey, Denver, Colorado 80225, USA, the International Seismological Centre and the National Data Centre of the Ghana Atomic Energy Commission. Events occurring in the study area were used to create an Epicentral Intensity Map and a seismicity map of the study area after interpolation of missing seismic magnitudes. The least-squares method and the maximum likelihood estimation method were employed to evaluate b-values of 0.6 and 0.9, respectively, for the study area. A thematic map of epicentral intensity superimposed on the geology of the study area was also developed to help understand the relationship between the virtually fractured, jointed and sheared geology and the seismic events. The results obtained are indicative of the fact that the stress level of GAMA has a telling effect on its seismicity, and also that events are prevalent at fractured, jointed and sheared zones. (au) 9. The CTBTO Link to the database of the International Seismological Centre (ISC) Science.gov (United States) Bondar, I.; Storchak, D. A.; Dando, B.; Harris, J.; Di Giacomo, D.
2011-12-01 The CTBTO Link to the database of the International Seismological Centre (ISC) is a project to provide access to seismological data sets maintained by the ISC using specially designed interactive tools. The Link is open to National Data Centres and to the CTBTO. By means of graphical interfaces and database queries tailored to the needs of the monitoring community, the users are given access to a multitude of products. These include the ISC and ISS bulletins, covering the seismicity of the Earth since 1904; nuclear and chemical explosions; the EHB bulletin; the IASPEI Reference Event list (ground truth database); and the IDC Reviewed Event Bulletin. The searches are divided into three main categories: The Area Based Search (a spatio-temporal search based on the ISC Bulletin), the REB search (a spatio-temporal search based on specific events in the REB) and the IMS Station Based Search (a search for historical patterns in the reports of seismic stations close to a particular IMS seismic station). The outputs are HTML based web-pages with a simplified version of the ISC Bulletin showing the most relevant parameters with access to ISC, GT, EHB and REB Bulletins in IMS1.0 format for single or multiple events. The CTBTO Link offers a tool to view REB events in context within the historical seismicity, look at observations reported by non-IMS networks, and investigate station histories and residual patterns for stations registered in the International Seismographic Station Registry. 10. 
The GINGERino ring laser gyroscope, seismological observations at one year from the first light Science.gov (United States) Simonelli, Andreino; Belfi, Jacopo; Beverini, Nicolò; Di Virgilio, Angela; Carelli, Giorgio; Maccioni, Enrico; De Luca, Gaetano; Saccorotti, Gilberto 2016-04-01 The GINGERino ring laser gyroscope (RLG) is a new large observatory-class RLG located in the Gran Sasso underground laboratory (LNGS), a national laboratory of the INFN (Istituto Nazionale di Fisica Nucleare). The GINGERino apparatus, funded by INFN in the context of a larger fundamental-physics project, is intended as a pathfinder instrument to reach the high sensitivity needed to observe general relativity effects; more details are found at the URL (https://web2.infn.it/GINGER/index.php/it/). The sensitivity reached by our instrument in the first year after setup permitted us to acquire important seismological data on ground rotations during the transit of seismic waves generated by earthquakes at different epicentral distances. RLGs are the best sensors for capturing the rotational motions associated with the transit of seismic waves: thanks to the optical measurement principle, these instruments are insensitive to translations. Ground translations are recorded by two seismometers: a Nanometrics Trillium 240 s and a Güralp CMG-3T 360 s; the first instrument is part of the national earthquake monitoring program of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and provides the ground translation data to be compared to the RLG rotational data. We report the waveforms and the seismological analysis of some seismic events recorded during our first year of activity inside the LNGS laboratory. 11. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets Directory of Open Access Journals (Sweden) K.
Hosseini 2017-10-01 Full Text Available We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control – routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS). 12. Urban Seismology: on the origin of earth vibrations within a city.
Science.gov (United States) Díaz, Jordi; Ruiz, Mario; Sánchez-Pastor, Pilar S; Romero, Paula 2017-11-10 Urban seismology has become an active research field in recent years, both with seismological objectives, such as obtaining better microzonation maps in highly populated areas, and with engineering objectives, such as the monitoring of traffic or the surveying of historical buildings. We analyze here the seismic records obtained by a broad-band seismic station installed in the ICTJA-CSIC institute, located near the center of Barcelona city. Although this station was installed to introduce visitors to earth science during science fairs and other dissemination events, the analysis of the data has allowed us to infer results of interest for the scientific community. The main results include the evidence that urban seismometers can be used as an easy-to-use, robust monitoring tool for road traffic and subway activity inside the city. Seismic signals generated by different cultural activities, including rock concerts, fireworks or football games, can be detected and discriminated on the basis of their seismic properties. Besides the interest in understanding the propagation of seismic waves generated by those rather particular sources, those earth-shaking records provide a powerful tool to gain visibility in the mass media and hence an opportunity to present earth sciences to a wider audience. 13. Coronal Seismology of Flare-Excited Standing Slow-Mode Waves Observed by SDO/AIA Science.gov (United States) Wang, Tongjiang; Ofman, Leon; Davila, Joseph M. 2016-05-01 Flare-excited longitudinal intensity oscillations in hot flaring loops have recently been detected by SDO/AIA in the 94 and 131 Å bandpasses. Based on the interpretation in terms of a slow-mode wave, quantitative evidence of thermal conduction suppression in hot (>9 MK) loops has been obtained for the first time from measurements of the polytropic index and phase shift between the temperature and density perturbations (Wang et al.
2015, ApJL, 811, L13). This result has significant implications in two aspects. One is that the thermal conduction suppression suggests the need for greatly enhanced compressive viscosity to interpret the observed strong wave damping. The other is that the conduction suppression provides a reasonable mechanism for explaining the long-duration events where the thermal plasma is sustained well beyond the duration of impulsive hard X-ray bursts in many flares, for a time much longer than expected by the classical Spitzer conductive cooling. In this study, we model the observed standing slow-mode wave in Wang et al. (2015) using a 1D nonlinear MHD code. With the seismology-derived transport coefficients for thermal conduction and compressive viscosity, we successfully simulate the oscillation period and damping time of the observed waves. Based on the parametric study of the effect of thermal conduction suppression and viscosity enhancement on the observables, we discuss the inversion scheme for determining the energy transport coefficients by coronal seismology. 14. Citizen seismology in Taiwan: what went wrong and what is the future? Science.gov (United States) Chen, K. H.; Liang, W. T.; Wu, Y. F. 2017-12-01 Citizen seismology encourages public involvement in data collection, analysis, and reporting, and has the potential to greatly improve the emergency response to seismic hazard. This, of course, is also important for scientific achievement, thanks to the dense network it creates. We believed in the value of citizen seismology and started by distributing Quake-Catcher-Network (QCN) sensors at schools in Taiwan. While working with teachers, we hoped to motivate the learning of how to read seismograms, what to see in the data, and what to teach in class. Through lots of workshops and activities, even with a near-real-time earthquake game competition and a board game (Quake-nopoly) developed along the way, we came to realize the huge gap between what people need and what we do.
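The oscillation periods and damping times discussed in the coronal seismology entry above are typically extracted by fitting an exponentially damped sinusoid to the loop-intensity time series. A minimal stdlib-Python sketch of recovering the damping time from successive peak amplitudes (all numbers are illustrative, not taken from the cited study):

```python
import math

def damping_time_from_peaks(period_s, amplitudes):
    """Estimate the e-folding damping time tau from successive oscillation
    peak amplitudes, assuming A_k = A_0 * exp(-k*P/tau).  The least-squares
    slope of log(A_k) versus t_k equals -1/tau."""
    times = [k * period_s for k in range(len(amplitudes))]
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) / \
            sum((t - tbar) ** 2 for t in times)
    return -1.0 / slope

# Synthetic wave: period 12 min, damping time 20 min (values illustrative)
P, tau = 12 * 60.0, 20 * 60.0
peaks = [1.0 * math.exp(-k * P / tau) for k in range(5)]
print(round(damping_time_from_peaks(P, peaks) / 60.0, 1))  # recovered tau, minutes
```

On noise-free synthetic peaks the input damping time is recovered exactly; with real AIA light curves a full nonlinear fit of the damped sinusoid is used instead, but the envelope logic is the same.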
And to bridge the gap, a new generation of citizen seismic networks is needed. Imagine that, at work, you receive an alarm from sensors at home that tells you the location, size, and type of anomalous shaking events in the neighborhood. Can this future "warning" system happen, allowing citizens to carry out emergency response? This is a story about facing the challenge, transforming the doubt of "why do I care" into a future IoT world. 15. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets Science.gov (United States) Hosseini, Kasra; Sigloch, Karin 2017-10-01 We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control - routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations.
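The response-file consistency check mentioned in the obspyDMT entry above can be reduced to a simple rule: the declared overall sensitivity of a recording channel should equal the product of its stage gains. A hedged stdlib sketch of that comparison (station names and gain values are invented for illustration):

```python
def sensitivity_mismatch(declared_sensitivity, stage_gains, rtol=0.01):
    """Return (product_of_gains, relative_error, flagged) for a response.
    Flags the response if the declared overall sensitivity deviates from
    the product of the stage gains by more than rtol (here 1%)."""
    product = 1.0
    for gain in stage_gains:
        product *= gain
    rel_err = abs(product - declared_sensitivity) / declared_sensitivity
    return product, rel_err, rel_err > rtol

# Two invented example stations: one consistent, one not
stations = {
    "STA1": (6.0e8, [1500.0, 400000.0]),  # 1500 V/(m/s) * 4e5 counts/V = 6e8
    "STA2": (6.0e8, [1500.0, 430000.0]),  # gains imply 6.45e8, not 6.0e8
}
for name, (declared, gains) in stations.items():
    _, err, bad = sensitivity_mismatch(declared, gains)
    print(name, "inconsistent" if bad else "ok", round(err, 3))
```

In practice a tool like obspyDMT reads these values out of RESP/StationXML metadata rather than hard-coded dictionaries; the arithmetic of the check is what this sketch shows.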
We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS). 16. Super-large optical gyroscopes for applications in geodesy and seismology: state-of-the-art and development prospects International Nuclear Information System (INIS) Velikoseltsev, A A; Luk'yanov, D P; Vinogradov, V I; Shreiber, K U 2014-01-01 A brief survey of the history of the invention and development of super-large laser gyroscopes (SLLGs) is presented. The basic results achieved using SLLGs in geodesy, seismology, fundamental physics and other fields are summarised. The concept of SLLG design, specific features of construction and implementation are considered, as well as the prospects of applying the present-day optical technologies to laser gyroscope engineering. The possibilities of using fibre-optical gyroscopes in seismologic studies are analysed and the results of preliminary experimental studies are presented. (laser gyroscopes) 17. Super-large optical gyroscopes for applications in geodesy and seismology: state-of-the-art and development prospects Energy Technology Data Exchange (ETDEWEB) Velikoseltsev, A A; Luk'yanov, D P [St. Petersburg Electrotechnical University 'LETI', St. Petersburg (Russian Federation); Vinogradov, V I [OJSC Tambov factory Elektropribor (Russian Federation); Shreiber, K U [Forschungseinrichtung Satellitengeodaesie, Technische Universitaet Muenchen, Geodaetisches Observatorium Wettzell, Sackenrieder str. 25, 93444 Bad Koetzting (Germany) 2014-12-31 A brief survey of the history of the invention and development of super-large laser gyroscopes (SLLGs) is presented. The basic results achieved using SLLGs in geodesy, seismology, fundamental physics and other fields are summarised.
The concept of SLLG design, specific features of construction and implementation are considered, as well as the prospects of applying the present-day optical technologies to laser gyroscope engineering. The possibilities of using fibre-optical gyroscopes in seismologic studies are analysed and the results of preliminary experimental studies are presented. (laser gyroscopes) 18. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation Science.gov (United States) Li, Xingxing 2014-05-01 Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and problematic integration of strong-motion data. Compared with the traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real-time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming and particularly difficult for the simultaneous and real-time analysis of GPS data from hundreds or thousands of ground stations. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied to GPS seismology.
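Single-receiver approaches of this kind ultimately deliver a displacement time series; one family of methods estimates the position change between adjacent epochs and then integrates those deltas once. A minimal stdlib sketch of that integration step (the delta values are invented):

```python
from itertools import accumulate

def integrate_deltas(delta_positions, initial=0.0):
    """Single integration of epoch-to-epoch position changes (m) into a
    displacement time series: each output sample is the running sum of
    all preceding deltas, starting from `initial`."""
    return list(accumulate(delta_positions, initial=initial))

# Invented 1 Hz east-component deltas (m) during a transient
deltas = [0.00, 0.02, 0.05, 0.03, -0.01, -0.04, 0.00]
disp = integrate_deltas(deltas)
print([round(d, 2) for d in disp])  # running displacement, starts at 0.0
```

Because noise in the per-epoch deltas accumulates under a single integration, real implementations follow this step with detrending, which is the drift issue the entry above alludes to. (`accumulate(..., initial=...)` needs Python 3.8+.)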
One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period of about thirty minutes to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs, and then displacements are obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to 19. Preliminary three-dimensional model of mantle convection with deformable, mobile continental lithosphere Science.gov (United States) Yoshida, Masaki 2010-06-01 Characteristic tectonic structures such as young orogenic belts and suture zones in a continent are expected to be mechanically weaker than the stable part of the continental lithosphere with the cratonic root (or cratonic lithosphere) and yield lateral viscosity variations in the continental lithosphere. In the present-day Earth's lithosphere, the pre-existing, mechanically weak zones emerge as a diffuse plate boundary. However, the dynamic role of a weak (low-viscosity) continental margin (WCM) in the stability of continental lithosphere has not been understood in terms of geophysics. Here, a new numerical simulation model of mantle convection with a compositionally and rheologically heterogeneous, deformable, mobile continental lithosphere is presented for the first time by using three-dimensional regional spherical-shell geometry. A compositionally buoyant and highly viscous continental assemblage with pre-existing WCMs, analogous to the past supercontinent, is modeled and imposed on well-developed mantle convection whose vigor of convection, internal heating rate, and rheological parameters are appropriate for the Earth's mantle.
The visco-plastic oceanic lithosphere and the associated subduction of oceanic plates are incorporated. The time integration of the advection of continental materials with zero chemical diffusion is performed by a tracer particle method. The time evolution of mantle convection after setting the model supercontinent is followed over 800 Myr. Earth-like continental drift is successfully reproduced, and the characteristic thermal interaction between the mantle and the continent/supercontinent is observed in my new numerical model. Results reveal that the WCM protects the cratonic lithosphere from being stretched by the convecting mantle and may play a significant role in the stability of the cratonic lithosphere during the geological timescale because it acts as a buffer that prevents the cratonic lithosphere from undergoing global 20. Lithospheric-scale analogue modelling of collision zones with a pre-existing weak zone, in "Deformation Mechanisms, Rheology and Tectonics: from Minerals to the Lithosphere" NARCIS (Netherlands) Willingshofer, E.; Sokoutis, D.; Burg, J.P. 2005-01-01 Lithospheric-scale analogue experiments have been conducted to investigate the influence of strength heterogeneities on the distribution and mode of crustal-scale deformation, on the resulting geometry of the deformed area, and on its topographic expression. Strength heterogeneities were 1. Lithospheric deformation inferred from electrical anisotropy of magnetotelluric data Science.gov (United States) Yin, Y.; Wei, W.; Jin, S.; Ye, G.; Unsworth, M. J.; Zhang, L. 2013-12-01 In our research, a comprehensive procedure of analyzing and modeling electrical anisotropy for MT data is suggested, based on the field examples of the Great Slave Lake shear zone (GSLsz) in western Canada, the North China Craton (NCC) and the Altyn Tagh fault in northern Tibet. Diverse dimensionality tools are used to distinguish heterogeneity and anisotropy from MT data. 
In addition to the phase splits and phase tensor polarizations, a combination of the phase tensor and induction arrows is applied to judge anisotropy. The skin depths of a specific period band are considered to determine whether these features result from anisotropy or heterogeneity. Specific resistivity structures in the 2-D isotropic inversion models can indicate electrical anisotropy as well: for example, dike-like media or a series of conductive 'blobs' can be observed in the 2-D isotropic inversion models of the GSLsz and NCC data. Anisotropic inversions can be undertaken using an improved inversion code based on an isotropic code but incorporating a trade-off parameter for electrical anisotropy named anisotropic tau. A series of anisotropic tau values has been applied to test its effect and to get the best trade-off between anisotropy and heterogeneity. Then, 2-D and 3-D forward modeling is undertaken to test the robustness of the major anisotropic features. The anisotropic structures inferred from the inversion models are replaced by various alternating isotropic or anisotropic structures to see if they are required. The fitting of the response curves compared with the field data and the corresponding r.m.s misfits can help us choose the best model that can generally illustrate the underground structure. Finally, the analysis and modeling result of the MT data from the North China Craton is taken as an example to demonstrate how the electrical anisotropy can be linked with the lithospheric deformation. According to the reliable models we got, there may be an anisotropic layer at the mid-lower crustal to 2. Double subduction of continental lithosphere, a key to form wide plateau Science.gov (United States) Replumaz, Anne; Funiciello, Francesca; Reitano, Riccardo; Faccenna, Claudio; Balon, Marie 2016-04-01 The mechanisms involved in the creation of high and wide topography, like the Tibetan Plateau, are still controversial.
In particular, the behaviour of the Indian and Asian lower continental lithospheres during the collision is a matter of debate: either thickening, densifying and delaminating, or keeping their rigidity and subducting. But for several decades, seismicity, seismic profiles and global tomography have highlighted the lithospheric structure of the Tibetan Plateau and made the hypotheses sustaining the models more precise. In particular, in the western syntaxis, it is now clear that the Indian lithosphere subducts northward beneath the Hindu Kush down to the transition zone, while the Asian one subducts southward beneath the Pamir (e.g. Negredo et al., 2007; Kufner et al., 2015). Such double subduction of continental lithospheres with opposite vergence has also been inferred for the early collision period. Cenozoic volcanic rocks between 50 and 30 Ma in the Qiangtang block have been interpreted as related to an Asian subduction beneath Qiangtang at that time (De Celles et al., 2011; Guillot and Replumaz, 2013). We present here analogue (silicone/honey) experiments to explore the subduction of continental lithosphere, using a piston as an analogue of far-field forces. We explore the parameters that control the subduction dynamics of the two continental lithospheres and the thickening of the plates at the surface, and compare with the Tibetan Plateau evolution. We show that a continental lithosphere is able to subduct in a collision context, even if lighter than the mantle, provided the plate is rigid enough. In that case the horizontal force due to the collision context, modelled by the piston push transmitted by the indenter, is the driving force, not the slab pull, which is negative. It is not a subduction driven by the weight of the slab, but a subduction induced by the collision, which we could call "collisional subduction". 3. Lithospheric thermal-rheological structure of the Ordos Basin and its geodynamics Science.gov (United States) Pan, J.; Huang, F.; He, L.; Wu, Q.
2015-12-01 The study of the destruction of the North China Craton has always been one of the hottest issues in earth sciences. Both its mechanism and spatial variation are fiercely debated and still unclear. However, geothermal research on the subject is relatively scarce. The Ordos Basin, located in the west of the North China Craton, is a typical intraplate basin. Based on two-dimensional thermal modeling along a profile across the Ordos Basin from east to west, we obtained the lithospheric thermal structure and rheology. Mantle heat flow in different regions of the Ordos Basin ranges from 21.2 to 24.5 mW/m2. In the east, mantle heat flow is higher, while heat flow in the western region is relatively low. But mantle heat flow is smooth and low overall, showing a stable thermal background. The ratio of crustal and mantle heat flow is between 1.51 and 1.84, indicating that the thermal contribution from the shallow crust is lower than that from the mantle. Rheological characteristics along the profile mostly follow the "jelly sandwich" model of a stable continental lithosphere, represented by a weak crustal portion but a strong lithospheric-mantle portion in the vertical strength profile. Based on the above, both the thermal structure and the lithospheric rheology of the Ordos Basin illustrate that the tectonic dynamics environment in the west of the North China Craton is relatively stable. Through the study of lithospheric thermal structure, we focus on the disparity in thickness between the thermal lithosphere and the seismic lithosphere. The difference in the western Ordos Basin is about 140 km, and it decreases gradually from the Fenwei graben in the eastern Ordos Basin to the Bohai Bay Basin; that is to say, the difference decreases gradually from the west to the east of the North China Craton. The simulation results imply that the viscosity of the asthenosphere under the North China Craton also decreases gradually from west to east, confirming that dehydration of the Pacific subduction is likely to have a great effect on the North China Craton. 4.
Effects of upper mantle heterogeneities on the lithospheric stress field and dynamic topography Science.gov (United States) Osei Tutu, Anthony; Steinberger, Bernhard; Sobolev, Stephan V.; Rogozhina, Irina; Popov, Anton A. 2018-05-01 The orientation and tectonic regime of the observed crustal/lithospheric stress field contribute to our knowledge of different deformation processes occurring within the Earth's crust and lithosphere. In this study, we analyze the influence of the thermal and density structure of the upper mantle on the lithospheric stress field and topography. We use a 3-D lithosphere-asthenosphere numerical model with power-law rheology, coupled to a spectral mantle flow code at 300 km depth. Our results are validated against the World Stress Map 2016 (WSM2016) and the observation-based residual topography. We derive the upper mantle thermal structure from either a heat flow model combined with a seafloor age model (TM1) or a global S-wave velocity model (TM2). We show that lateral density heterogeneities in the upper 300 km have a limited influence on the modeled horizontal stress field as opposed to the resulting dynamic topography that appears more sensitive to such heterogeneities. The modeled stress field directions, using only the mantle heterogeneities below 300 km, are not perturbed much when the effects of lithosphere and crust above 300 km are added. In contrast, modeled stress magnitudes and dynamic topography are to a greater extent controlled by the upper mantle density structure. After correction for the chemical depletion of continents, the TM2 model leads to a much better fit with the observed residual topography giving a good correlation of 0.51 in continents, but this correction leads to no significant improvement of the fit between the WSM2016 and the resulting lithosphere stresses. In continental regions with abundant heat flow data, TM1 results in relatively small angular misfits. 
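The angular misfits reported above compare modeled and observed maximum-horizontal-stress (SHmax) azimuths; because stress orientations are axial data (θ and θ + 180° are equivalent), each per-site misfit lies between 0° and 90°. A stdlib sketch of that comparison with invented azimuths:

```python
def axial_misfit_deg(a, b):
    """Smallest angle between two axial orientations given in degrees;
    because theta and theta + 180 are equivalent, the result is in [0, 90]."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def mean_misfit(modeled, observed):
    """Mean per-site angular misfit between two lists of azimuths (deg)."""
    pairs = list(zip(modeled, observed))
    return sum(axial_misfit_deg(m, o) for m, o in pairs) / len(pairs)

# Invented SHmax azimuths (degrees) at four sites
modeled = [145.0, 10.0, 88.0, 170.0]
observed = [150.0, 175.0, 95.0, 5.0]
print(round(mean_misfit(modeled, observed), 1))  # mean misfit in degrees
```

Note how 10° vs. 175° counts as a 15° misfit, not 165°; without the axial wrap, near-north orientations would be badly penalized.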
For example, in western Europe the misfit between the modeled and observation-based stress is 18.3°. Our findings emphasize that the relative contributions coming from shallow and deep mantle dynamic forces are quite different for the lithospheric stress field and dynamic 5. Interaction between mantle and crustal detachments: a non-linear system controlling lithospheric extension Science.gov (United States) Rosenbaum, G.; Regenauer-Lieb, K.; Weinberg, R. F. 2009-12-01 We use numerical modelling to investigate the development of crustal and mantle detachment faults during lithospheric extension. Our models simulate a wide range of rift systems with varying values of crustal thickness and heat flow, showing how strain localization in the mantle interacts with localization in the upper crust and controls the evolution of extensional systems. Model results reveal a richness of structures and deformation styles, which grow in response to a self-organized mechanism that minimizes the internal stored energy of the system by localizing deformation at different levels of the lithosphere. Crustal detachment faults are well developed during extension of overthickened (60 km) continental crust, even when the initial heat flow is relatively low (50 mW/m2). In contrast, localized mantle deformation is most pronounced when the extended lithosphere has a normal crustal thickness (30-40 km) and an intermediate (60-70 mW/m2) heat flow. Results show a non-linear response to subtle changes in crustal thickness or heat flow, characterized by abrupt and sometimes unexpected switches in extension modes (e.g. from diffuse rifting to effective lithospheric-scale rupturing) or from mantle- to crust-dominated strain localization. We interpret this non-linearity to result from the interference of doming wavelengths.
Disharmony of crust and mantle doming wavelengths results in efficient communication between shear zones at different lithospheric levels, leading to rupturing of the whole lithosphere. In contrast, harmonious crust and mantle doming inhibits interaction of shear zones across the lithosphere and results in a prolonged rifting history prior to continental breakup. 6. Lithospheric thickness jumps at the S-Atlantic continental margins from satellite gravity data and modelled isostatic anomalies Science.gov (United States) Shahraki, Meysam; Schmeling, Harro; Haas, Peter 2018-01-01 Isostatic equilibrium is a good approximation for passive continental margins. In these regions, geoid anomalies are proportional to the local dipole moment of density-depth distributions, which can be used to constrain the amount of oceanic to continental lithospheric thickening (lithospheric jumps). We consider a five- or three-layer 1D model for the oceanic and continental lithosphere, respectively, composed of water, a sediment layer (both for the oceanic case), the crust, the mantle lithosphere and the asthenosphere. The mantle lithosphere is defined by a mantle density, which is a function of temperature and composition, due to melt depletion. In addition, a depth-dependent sediment density associated with compaction and ocean floor variation is adopted. We analyzed satellite derived geoid data and, after filtering, extracted typical averaged profiles across the Western and Eastern passive margins of the South Atlantic. They show geoid jumps of 8.1 m and 7.0 m for the Argentinian and African sides, respectively. Together with topography data and an averaged crustal density at the conjugate margins these jumps are interpreted as isostatic geoid anomalies and yield best-fitting crustal and lithospheric thicknesses. 
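The proportionality used above between the geoid anomaly and the local dipole moment of the density-depth distribution can be written, in the long-wavelength isostatic limit, as ΔN = -(2πG/g) ∫ z Δρ(z) dz. A stdlib sketch evaluating it for piecewise-constant density-contrast layers (the layer values are invented, not taken from the study):

```python
from math import pi

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
g = 9.81       # surface gravity, m s^-2

def geoid_anomaly(layers):
    """Geoid anomaly (m) from the dipole moment of a density contrast:
    dN = -(2*pi*G/g) * sum over layers of drho * (z_bot^2 - z_top^2) / 2,
    with depths z in metres (positive down) and contrasts drho in kg/m^3."""
    moment = sum(0.5 * (zb ** 2 - zt ** 2) * dr for zt, zb, dr in layers)
    return -(2.0 * pi * G / g) * moment

# Single invented layer: +100 kg/m^3 contrast from the surface down to 100 km
print(round(geoid_anomaly([(0.0, 1.0e5, 100.0)]), 1))  # metres
```

A contrast of this size and depth extent yields an anomaly of order 20 m, the same order as the 7-8 m geoid jumps quoted across the conjugate margins.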
In a grid search approach five parameters are systematically varied, namely the thicknesses of the sediment layer, the oceanic and continental crusts and the oceanic and the continental mantle lithosphere. The set of successful models reveals a clear asymmetry of 15 km between the South African and Argentine lithospheres. Preferred models predict a sediment layer at the Argentine margin of 3-6 km and at the South African margin of 1-2.5 km. Moreover, we derived a linear relationship between oceanic lithospheric thickness, sediment thickness and lithospheric jumps at the South Atlantic margins. It suggests that the continental lithospheres on the western and eastern South Atlantic are thicker by 45-70 and 60-80 km than the oceanic lithospheres, respectively. 7. Crustal Models Assessment in Western Part of Romania Employing Active Seismic and Seismologic Methods Science.gov (United States) Bala, Andrei; Toma-Danila, Dragos; Tataru, Dragos; Grecu, Bogdan 2017-12-01 In the years 1999-2000, two regional seismic refraction lines were acquired in close cooperation with German partners from the University of Karlsruhe. One of these lines is Vrancea 2001, 420 km in length, almost half of it recorded in the Transylvanian Basin. The seismic line revealed a very complicated crustal structure beginning with the Eastern Carpathians and continuing in the Transylvanian Basin until Medias. As a result of the development of the National Seismic Network in the last ten years, more than 100 permanent broadband stations are now continuously operating in Romania. Complementary to this national dataset, maintained and developed in the National Institute for Earth Physics, new data emerged from the temporary seismologic networks established during the joint projects with European partners in the last decades.
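Crustal models like the one just described are assembled from scattered point constraints (refraction-line picks, station-based estimates) that must be gridded into boundary-depth maps. A minimal inverse-distance-weighting sketch of that gridding step (coordinates and Moho depths are invented, not from any published model):

```python
def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from scattered
    (xi, yi, value) constraints, e.g. Moho-depth picks along refraction
    lines.  Exact hits return the constraint value itself."""
    num = den = 0.0
    for xi, yi, v in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v                      # exactly on a constraint point
        w = d2 ** (-power / 2.0)          # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Invented Moho-depth picks (km) at map coordinates (km)
picks = [(0.0, 0.0, 32.0), (100.0, 0.0, 36.0), (50.0, 80.0, 30.0)]
print(round(idw(picks, 50.0, 20.0), 1))  # interpolated depth, km
```

IDW is only one of several gridding choices (kriging and spline fitting are common alternatives); it is shown here because it is the simplest scheme that honors the point constraints exactly.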
The data gathered so far are valuable both for seismology purposes and crustal structure studies, especially for the western part of the country, where this kind of data was sparse until now. Between 2009 and 2011, a new reference model for the Earth's crust and mantle of the European Plate was defined through the NERIES project from existing data and models. The data gathered from different kinds of measurements in the Transylvanian Basin and the eastern Pannonian Basin were included in this NERIES model, and an improved and upgraded model of the Earth's crust emerged for the western part of Romania. Although the dataset has its origins in several periods over the last 50 years, the results are homogeneous and they improve and strengthen our image of the depth of the principal boundaries in the crust. In the last chapter two maps regarding these boundaries are constructed, one for the mid-crustal boundary and one for the Moho. They were built considering all the point information available from different sources in active seismics and seismology, which is introduced in the general maps from the NERIES project for 8. Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology Science.gov (United States) Schwarz, B.; Sigloch, K.; Nissen-Meyer, T. 2017-12-01 Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays, whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces or waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave field coherence on the length scales of passive-source seismology, e.g.
for the imaging of transition-zone discontinuities or the core-mantle-boundary using reflected precursors. Results are, however, often degraded by sparse station coverage and by interference of faint back-scattered phases with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density, and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience in controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Owing to the fact that stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. We 9. Laboratory-based Interpretation of Seismological Models: Dealing with Incomplete or Incompatible Experimental Data (Invited) Science.gov (United States) Jackson, I.; Kennett, B. L.; Faul, U. H. 2009-12-01 In parallel with cooperative developments in seismology during the past 25 years, there have been phenomenal advances in mineral/rock physics, making laboratory-based interpretation of seismological models increasingly useful. However, the assimilation of diverse experimental data into a physically sound framework for seismological application is not without its challenges, as demonstrated by two examples.
In the first example, that of equation-of-state and elasticity data, an appropriate, thermodynamically consistent framework involves finite-strain expansion of the Helmholtz free energy incorporating the Debye approximation to the lattice vibrational energy, as advocated by Stixrude and Lithgow-Bertelloni. Within this context, pressure, specific heat and entropy, thermal expansion, elastic constants and their adiabatic and isothermal pressure derivatives are all calculable without further approximation in an internally consistent manner. The opportunities and challenges of assimilating a wide range of sometimes marginally incompatible experimental data into a single model of this type will be demonstrated with reference to MgO, unquestionably the most thoroughly studied mantle mineral. A neighbourhood-algorithm inversion has identified a broadly satisfactory model, but uncertainties in key parameters associated particularly with pressure calibration remain sufficiently large as to preclude definitive conclusions concerning lower-mantle chemical composition and departures from adiabaticity. The second example is the much less complete dataset concerning seismic-wave dispersion and attenuation emerging from low-frequency forced-oscillation experiments. Significant progress has been made during the past decade towards an understanding of high-temperature, micro-strain viscoelastic relaxation in upper-mantle materials, especially as regards the roles of oscillation period, temperature, grain size and melt fraction. However, the influence of other potentially important 10.
Seismology of Giant Planets: General Overview and Results from the Kepler K2 Observations of Neptune Directory of Open Access Journals (Sweden) Gaulme Patrick 2017-01-01 Full Text Available For this invited contribution, I was asked to give an overview of the application of helio- and asteroseismic techniques to study the interior of giant planets, and to specifically present the recent observations of Neptune by Kepler K2. Seismology applied to giant planets could drastically change our understanding of their deep interiors, as it has happened with the Earth, the Sun, and many main-sequence and evolved stars. The study of giant planets' composition is important for understanding both the mechanisms enabling their formation and the origins of planetary systems, in particular our own. Unfortunately, its determination is complicated by the fact that their interior is thought not to be homogeneous, so that spectroscopic determinations of atmospheric abundances are probably not representative of the planet as a whole. Instead, the determination of their composition and structure must rely on indirect measurements and interior models. Giant planets are mostly fluid and convective, which makes their seismology much closer to that of solar-like stars than that of terrestrial planets. Hence, helioseismology techniques naturally transfer to giant planets. In addition, two alternative methods can be used: photometry of the solar light reflected by planetary atmospheres, and ring seismology in the specific case of Saturn. The current decade has been promising thanks to the detection of Jupiter's acoustic oscillations with the ground-based imaging-spectrometer SYMPA and the indirect detection of Saturn's f-modes in its rings by the NASA Cassini orbiter. This has motivated new projects of ground-based and space-borne instruments that are under development. The K2 observations represented the first opportunity to search for planetary oscillations with visible photometry.
Despite the excellent quality of K2 data, the noise level of the power spectrum of the light curve was not low enough to detect Neptune's oscillations. The main results from the 12. The Hellenic Seismological Network Of Crete (HSNC): Validation and results of the 2013 aftershock sequences Science.gov (United States) Chatzopoulos, Georgios; Papadopoulos, Ilias; Vallianatos, Filippos 2015-04-01 The number and quality of seismological networks in Europe has increased in the past decades. Nevertheless, the need for localized networks monitoring areas of great seismic and scientific interest is constant. The Hellenic Seismological Network of Crete (HSNC) covers this need for the vicinity of the South Aegean Sea and Crete Island. In the present work, with the use of the Z-map software (www.seismo.ethz.ch), the spatial variability of the Magnitude of Completeness (Mc) is calculated from HSNC's manually analysed catalogue of events for the period 2011 until today, demonstrating the good coverage of the HSNC in these areas. Furthermore, the 2013 South Aegean seismicity, during which two large shallow earthquakes occurred in the vicinity of Crete Island, is discussed. The first event took place on 15 June 2013 at the front of the Hellenic Arc, south of central Crete, while the second occurred on 12 October 2013 in the western part of Crete. The two main shocks and their aftershock sequences have been relocated with the use of the Hypoinverse earthquake location software and an appropriate crust model. The HSNC identified more than 500 and 300 aftershocks, respectively, following the two main events.
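The magnitude-of-completeness analysis mentioned above is usually paired with a Gutenberg-Richter b-value estimate. A minimal sketch using Aki's maximum-likelihood formula on a synthetic catalogue (all values illustrative, not HSNC data):

```python
import math
import random

# Maximum-likelihood b-value estimate (Aki, 1965) for events above the
# magnitude of completeness Mc. The catalogue is synthetic, drawn from a
# Gutenberg-Richter distribution with b = 1 -- illustrative only.
random.seed(0)
b_true, Mc = 1.0, 2.5
beta = b_true * math.log(10.0)
mags = [Mc + random.expovariate(beta) for _ in range(5000)]

mean_m = sum(mags) / len(mags)
b_est = math.log10(math.e) / (mean_m - Mc)
print(round(b_est, 2))  # close to the b = 1 used to generate the catalogue
```

The formula exploits the fact that, above Mc, Gutenberg-Richter magnitudes are exponentially distributed, so the mean magnitude alone determines b.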
The detailed construction of the aftershock catalogues permits the application of modern theories based on complexity science, as described recently in the framework of non-extensive statistical physics. In addition, site effects at the station locations are presented using event and noise recordings. This work was implemented through the project IMPACT-ARC in the framework of the action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds. References: A. Tzanis and F. Vallianatos, "Distributed power-law seismicity changes and crustal deformation in the EW Hellenic Arc", Natural Hazards and Earth Systems Sciences, 3, 179-195, 2003; F. Vallianatos, G 13. Promoting seismology education through collaboration between university research scientists and school teachers Science.gov (United States) Brunt, M. R.; Ellins, K. K.; Boyd, D.; Mote, A. S.; Pulliam, J.; Frohlich, C. A. 2012-12-01 Participation in the NSF-sponsored Texas Earth and Space Science (TXESS) Revolution teacher professional development project paved the way for several teachers to receive educational seismometers and join the IRIS Seismograph in Schools program. This, in turn, has led to secondary school teachers working with university seismologists on research projects. Examples are the NSF-EarthScope SIEDCAR (Seismic Investigation of Edge Driven Convection Associated with the Rio Grande Rift) project; field studies to compile felt-reports for Texas earthquakes, some of which may have been induced by human activities; and a seismic study of the Texas Gulf Coast to investigate ocean-continent transition processes along a passive margin. Such collaborations are mutually beneficial in nature.
They help scientists to accomplish their research objectives, involve teachers and their students in authentic, inquiry-based science, promote public awareness of such projects, and open the doors to advancement opportunities for the teachers involved. In some cases, bringing together research scientists and teachers results in collaborations that produce publishable research. In order to effectively integrate seismology research into grade 7-12 education, one of us (Brunt) established the Eagle Pass Junior High Seismology Team in connection with IRIS Seismograph in Schools, station EPTX (AS-1 seismograph), to teach students about earthquakes using authentic real-time data. The concept has sparked interest among other secondary teachers, leading to the creation of two similarly organized seismology teams: WPTX (Boyd, Williams Preparatory School, Dallas) and THTX (Mote, Ann Richards School for Young Women Leaders, Austin). Although the educational seismometers are basic instruments, they are effective educational tools. Seismographs in schools offer students opportunities to learn how earthquakes are recorded and how modern seismometers work, to collect and interpret seismic data, and to 14. Geodynamic inversion to constrain the non-linear rheology of the lithosphere Science.gov (United States) Baumann, T. S.; Kaus, Boris J. P. 2015-08-01 One of the main methods to determine the strength of the lithosphere is to estimate its effective elastic thickness. This method assumes that the lithosphere is a thin elastic plate that floats on the mantle and uses both topography and gravity anomalies to estimate the plate thickness. Whereas this seems to work well for oceanic plates, it has given controversial results in continental collision zones. For most of these locations, additional geophysical data sets such as receiver functions and seismic tomography exist that constrain the geometry of the lithosphere and often show that it is rather complex.
Yet, lithospheric geometry by itself is insufficient to understand the dynamics of the lithosphere as this also requires knowledge of the rheology of the lithosphere. Laboratory experiments suggest that rocks deform in a viscous manner if temperatures are high and stresses low, or in a plastic/brittle manner if the yield stress is exceeded. Yet, the experimental results show significant variability between various rock types and there are large uncertainties in extrapolating laboratory values to nature, which leaves room for speculation. An independent method is thus required to better understand the rheology and dynamics of the lithosphere in collision zones. The goal of this paper is to discuss such an approach. Our method relies on performing numerical thermomechanical forward models of the present-day lithosphere with an initial geometry that is constructed from geophysical data sets. We employ experimentally determined creep-laws for the various parts of the lithosphere, but assume that the parameters of these creep-laws as well as the temperature structure of the lithosphere are uncertain. This is used as a priori information to formulate a Bayesian inverse problem that employs topography, gravity, horizontal and vertical surface velocities to invert for the unknown material parameters and temperature structure. In order to test the general methodology 15. Life in the lithosphere, kinetics and the prospects for life elsewhere. Science.gov (United States) Cockell, Charles S 2011-02-13 The global contiguity of life on the Earth today is a result of the high flux of carbon and oxygen from oxygenic photosynthesis over the planetary surface and its use in aerobic respiration. Life's ability to directly use redox couples from components of the planetary lithosphere in a pre-oxygenic photosynthetic world can be investigated by studying the distribution of organisms that use energy sources normally bound within rocks, such as iron. 
Microbiological data from Iceland and the deep oceans show the kinetic limitations of living directly off igneous rocks in the lithosphere. Using energy directly extracted from rocks, the lithosphere will support about six orders of magnitude less productivity than the present-day Earth, and it would be highly localized. Paradoxically, the biologically extreme conditions of the interior of a planet and the inimical conditions of outer space, between which life is trapped, are the locations from which volcanism and impact events, respectively, originate. These processes facilitate the release of redox couples from the planetary lithosphere and might enable it to achieve planetary-scale productivity approximately one to two orders of magnitude lower than that produced by oxygenic photosynthesis. The significance of the detection of extra-terrestrial life is that it will allow us to test these observations elsewhere and establish an understanding of universal relationships between lithospheres and life. These data also show that the search for extra-terrestrial life must be accomplished by 'following the kinetics', which is different from following the water or energy. 16. The lithosphere-asthenosphere system in the Calabrian Arc and surrounding seas Energy Technology Data Exchange (ETDEWEB) Panza, G F [Department of Earth Sciences, University of Trieste, Trieste (Italy); [Abdus Salam International Centre for Theoretical Physics, SAND Group, Trieste (Italy)].
E-mail: [email protected]; Pontevivo, A [Department of Earth Sciences, University of Trieste, Trieste (Italy) 2002-10-01 Through the non-linear inversion of Surface-Wave Tomography data, using as a priori constraints seismic data from literature, it has been possible to define a fairly detailed structural model of the lithosphere-asthenosphere system (thickness, S-wave and P-wave velocities of the crust and of the upper mantle layers) in the Calabrian Arc region (Southern Tyrrhenian Sea, Calabria and the Northern-Western part of the Ionian Sea). The main features identified by our study are: (1) a very shallow (less then 10 km deep) crust-mantle transition in the Southern Tyrrhenian Sea and very low S-wave velocities just below a very thin lid in correspondence of the submarine volcanic bodies in the study area; (2) a shallow and very low S-wave velocity layer in the mantle in the areas of Aeolian islands, of Vesuvius, Ischia and Phlegraean Fields, representing their shallow-mantle magma source; (3) a thickened continental crust and lithospheric doubling in Calabria; (4) a crust about 25 km thick and a mantle velocity profile versus depth consistent with the presence of a continental rifled, now thermally relaxed, lithosphere in the investigated part of the Ionian Sea; (5) the subduction of the Ionian lithosphere towards NW below the Tyrrhenian Basin; (6) the subduction of the Adriatic lithosphere underneath the Vesuvius and Phlegraean Fields. (author) 17. The "Tsunami Earthquake" of 13 April 1923 in Northern Kamchatka: Seismological and Hydrodynamic Investigations Science.gov (United States) Salaree, Amir; Okal, Emile A. 2018-04-01 We present a seismological and hydrodynamic investigation of the earthquake of 13 April 1923 at Ust'-Kamchatsk, Northern Kamchatka, which generated a more powerful and damaging tsunami than the larger event of 03 February 1923, thus qualifying as a so-called "tsunami earthquake". 
On the basis of modern relocations, we suggest that it took place outside the fault area of the mainshock, across the oblique Pacific-North America plate boundary, a model confirmed by a limited dataset of mantle waves, which also confirms the slow nature of the source, characteristic of tsunami earthquakes. However, numerical simulations for a number of legitimate seismic models fail to reproduce the sharply peaked distribution of tsunami wave amplitudes reported in the literature. By contrast, we can reproduce the distribution of reported wave amplitudes using an underwater landslide as the source of the tsunami, itself triggered by the earthquake inside the Kamchatskiy Bight. 18. The West-African craton margin in eastern Senegal: a seismological study International Nuclear Information System (INIS) Dorbath, Catherine; Dorbath, Louis; Gaulon, Roland; Le Page, Alain 1983-01-01 A vertical short-period seismological array was operated for six months in eastern Senegal. Large P-wave travel-time anomalies are in fairly good relation with the gravity and geological features. Two-dimensional inversion of the data shows the existence of a major vertical discontinuity extending from the surface to 150-200 km depth. The other heterogeneities are mainly located in the crust and related to specific segments of the regional geology: the craton, the Mauritanides and the Senegalo-Mauritanian basin. The main discontinuity, dipping to the east, is interpreted as the trace of an old subduction slab. We propose the following geodynamical process to explain the formation of the Mauritanides orogenic belt: continental collision after the opening of a back-arc marginal basin in the late Precambrian and its closure until the Devonian 19.
Research and development activities of the Seismology Section for the period January 1982-December 1983 International Nuclear Information System (INIS) Roy, Falguni 1984-01-01 The research and development activities of the Seismology Section of the Bhabha Atomic Research Centre (BARC) at Bombay are reported for the period January 1982-December 1983 in the form of summaries. The Section's activities are mainly directed towards the detection of underground nuclear explosions. During the report period, 64 signals out of about 12000 seismograms examined were identified as signals due to underground nuclear explosions. The instrumentation work for Kolar rockburst research was almost completed under the collaboration programme of BARC with Bharat Gold Mines Ltd. Analytical methods have been developed for interpreting the frequency-magnitude relation of earthquakes. These methods will be useful in the estimation of seismic risk in cases where only restricted data involving events of low magnitude are available. A list of publications of the staff members of the Section during the report period is given. (M.G.B.) 20. A Service-based Approach to Connect Seismological Infrastructures: Current Efforts at the IRIS DMC Science.gov (United States) 2014-05-01 As part of the COOPEUS initiative to build infrastructure that connects European and US research infrastructures, IRIS has advocated the development of federated services based upon internationally recognized standards using web services. By deploying International Federation of Digital Seismograph Networks (FDSN) endorsed web services at multiple data centers in the US and Europe, we have shown that integration within the seismological domain can be realized. By deploying identical methods to invoke the web services at multiple centers, this approach can significantly ease the way a scientist accesses seismic data (time series, metadata, and earthquake catalogs) from distributed federated centers.
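The federated access described above rests on the FDSN web service conventions. A minimal sketch of building a dataselect query URL of the kind such services accept (network, station and time window are placeholder values, not taken from the text):

```python
from urllib.parse import urlencode

# Construct an FDSN-WS dataselect request URL. The station and time
# values are illustrative placeholders, not from any specific study.
base = "http://service.iris.edu/fdsnws/dataselect/1/query"
params = {
    "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
    "starttime": "2014-01-01T00:00:00",
    "endtime": "2014-01-01T01:00:00",
}
url = base + "?" + urlencode(params)
print(url)
```

Because every federated center exposes the same query interface, client software only needs to swap the base URL to pull data from a different data center.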
IRIS has developed an IRIS federator that helps a user identify where seismic data from global seismic networks can be accessed. The web services based federator can build the appropriate URLs and return them to client software running on the scientist's own computer. These URLs are then used to directly pull data from the distributed centers in a peer-based fashion. IRIS is also involved in deploying web services across horizontal domains. As part of the US National Science Foundation's (NSF) EarthCube effort, an IRIS-led EarthCube Building Blocks project is underway. When completed, this project will aid in the discovery, access, and usability of data across multiple geoscience domains. This presentation will summarize current IRIS efforts in building vertical integration infrastructure within seismology, working closely with 5 centers in Europe and 2 centers in the US, as well as how we are taking first steps toward horizontal integration of data from 14 different domains in the US, in Europe, and around the world. 1. ObsPy: A Python Toolbox for Seismology - Recent Developments and Applications Science.gov (United States) Megies, T.; Krischer, L.; Barsch, R.; Sales de Andrade, E.; Beyreuther, M. 2014-12-01 ObsPy (http://www.obspy.org) is a community-driven, open-source project dedicated to building a bridge for seismology into the scientific Python ecosystem. It offers (a) read and write support for essentially all commonly used waveform, station, and event metadata file formats with a unified interface, (b) a comprehensive signal processing toolbox tuned to the needs of seismologists, (c) integrated access to all large data centers, web services and databases, and (d) convenient wrappers to legacy codes like libtau and evalresp. Python, currently the most popular language for teaching introductory computer science courses at top-ranked U.S. departments, is a full-blown programming language with the flexibility of an interactive scripting language.
Its extensive standard library and large variety of freely available high-quality scientific modules cover most needs in developing scientific processing workflows. Together with packages like NumPy, SciPy, Matplotlib, IPython, Pandas, lxml, and PyQt, ObsPy enables the construction of complete workflows in Python. These vary from reading locally stored data or requesting data from one or more different data centers, through signal analysis and data processing, to visualizations in GUI and web applications, output of modified/derived data, and the creation of publication-quality figures. ObsPy enjoys wide adoption in the community. Applications successfully using it include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions, to name a few examples. All functionality is extensively documented, and the ObsPy tutorial and gallery give a good impression of the wide range of possible use cases. We will present the basic features of ObsPy, new developments and applications, and a roadmap for the near future, and discuss the sustainability of our open-source development model. 2. Origin of Starting Earthquakes under Complete Coupling of the Lithosphere Plates and a Base Science.gov (United States) Babeshko, V. A.; Evdokimova, O. V.; Babeshko, O. M.; Zaretskaya, M. V.; Gorshkova, E. M.; Mukhin, A. S.; Gladskoi, I. B. 2018-02-01 The boundary problem of rigid coupling of lithospheric plates, modeled by Kirchhoff plates, with a base represented by a three-dimensional deformable layered medium is considered. The possibility of the occurrence of a starting earthquake in such a block structure is investigated. For this purpose, two states of this medium in the static mode are considered. In the first case, the semi-infinite lithospheric plates, in the form of half-planes, are separated so that the distance between their end faces is different from zero.
In the second case, the lithospheric plates come together to zero spacing between them. Calculations have shown that in this case more complex movements of the Earth's surface are possible. Among such movements are the cases described in our previous publications [1, 2]. 3. Determination of intrinsic attenuation in the oceanic lithosphere-asthenosphere system Science.gov (United States) Takeuchi, Nozomu; Kawakatsu, Hitoshi; Shiobara, Hajime; Isse, Takehi; Sugioka, Hiroko; Ito, Aki; Utada, Hisashi 2017-12-01 We recorded P and S waves traveling through the oceanic lithosphere-asthenosphere system (LAS) using broadband ocean-bottom seismometers in the northwest Pacific, and we quantitatively separated the intrinsic (anelastic) and extrinsic (scattering) attenuation effects on seismic wave propagation to directly infer the thermomechanical properties of the oceanic LAS. The strong intrinsic attenuation in the asthenosphere obtained at higher frequency (~3 hertz) is comparable to that constrained at lower frequency (~100 seconds) by surface waves and suggests frequency-independent anelasticity, whereas the intrinsic attenuation in the lithosphere is frequency dependent. This difference in frequency dependence indicates that the strong and broad peak dissipation recently observed in the laboratory exists only in the asthenosphere and provides new insight into what distinguishes the asthenosphere from the lithosphere. 4. Use of along-track magnetic field differences in lithospheric field modelling DEFF Research Database (Denmark) Kotsiaros, Stavros; Finlay, Chris; Olsen, Nils 2015-01-01 . Experiments in modelling the Earth's lithospheric magnetic field with along-track differences are presented here as a proof of concept. 
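The along-track first-difference idea can be illustrated with a toy one-dimensional profile; the synthetic field and sample spacing below are assumptions for the sketch, not CHAMP data:

```python
import math

# First differences of consecutive along-track samples approximate the
# along-track field gradient. Synthetic 1-D "field" profile for illustration.
ds = 1.0                                      # sample spacing, arbitrary units
track = [k * ds for k in range(100)]
field = [math.sin(0.1 * s) for s in track]    # synthetic field values

# (f[i+1] - f[i]) / ds approximates df/ds at the midpoint of each interval
grad = [(field[i + 1] - field[i]) / ds for i in range(len(field) - 1)]

mid = 50
analytic = 0.1 * math.cos(0.1 * (track[mid] + ds / 2))  # true derivative at midpoint
print(abs(grad[mid] - analytic) < 1e-3)  # → True
```

The first difference is a second-order-accurate estimate of the derivative at the interval midpoint, which is why differencing consecutive satellite samples recovers small-scale gradient information well.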
We demonstrate that first differences of polar orbiting satellite magnetic data in the along-track direction can be used to obtain high resolution models of the lithospheric field. Along-track differences approximate the north–south magnetic field gradients for non-polar latitudes. In a test case, using 2 yr of low altitude data from the CHAMP satellite, we show that use of along-track differences of vector field data results in an enhanced recovery of the small scale lithospheric field, compared to the use of the vector field data themselves. We show that the along-track technique performs... We anticipate that use of such along-track differences in combination with east–west field differences, as are now provided by the Swarm satellite constellation... 5. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data DEFF Research Database (Denmark) Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field... for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model... Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution, investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available... 6.
Low Seismic Attenuation in Southern New England Lithosphere Implies Little Heating by the Upwelling Asthenosphere Science.gov (United States) Lamoureux, J. M.; Menke, W. H. 2017-12-01 The Northern Appalachian Anomaly (NAA) is a patch of the asthenosphere in southern New England that is unusually hot given its passive margin setting. Previous research has detected large seismic wave delays that imply a temperature 770 deg C higher than the mantle below the adjacent craton at the same depth. A key outstanding issue is whether the NAA interacts with the lithosphere above it (e.g. by heating it up). We study this issue using Po and So waves from two magnitude >5.5 earthquakes near the Puerto Rico Trench. These waves, propagating in the cold oceanic lithosphere at near-Moho speeds, deliver high frequency energy to the shallow continental lithosphere. We hypothesized that: (1) once within the continental lithosphere, Po and So experience attenuation with distance that can be quantified by a quality factor Q, and that (2) any heating of the lithosphere above the NAA would lead to a lower Q than in regions further north or south along the continental margin. The corresponding Po and So velocities would also be lower. The decay rates of Po and So are estimated by least squares applied to RMS coda amplitudes measured from digital seismograms from stations in northeastern North America, corrected for instrument response. A roughly log-linear decrease in amplitude is observed, corresponding to P- and S-wave quality factors in the range of 394-1500 and 727-6847, respectively. Measurements are made for four margin-perpendicular geographical bands, with one band overlapping the NAA. We detect no effect of the NAA on these amplitudes; the 95% confidence bounds overlap in every case. Furthermore, all quality factors are much higher than the 100 predicted by lab experiments for near-solidus mantle rocks.
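The log-linear least-squares estimate of a quality factor from amplitude decay, as used above, can be sketched as follows; the frequency, velocity and Q values are illustrative, not the study's measurements:

```python
import math

# Synthetic amplitude decay A(x) = A0 * exp(-pi * f * x / (Q * v)).
# All parameter values are illustrative only.
f = 5.0          # frequency, Hz
v = 8.0          # wave speed, km/s
Q_true = 1000.0
A0 = 1.0
xs = [200.0 * k for k in range(1, 11)]   # epicentral distances, km
amps = [A0 * math.exp(-math.pi * f * x / (Q_true * v)) for x in xs]

# Least-squares fit of ln(A) vs x; the slope equals -pi*f/(Q*v)
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(math.log(a) for a in amps) / n
slope = sum((x - mean_x) * (math.log(a) - mean_y) for x, a in zip(xs, amps)) \
        / sum((x - mean_x) ** 2 for x in xs)
Q_est = -math.pi * f / (slope * v)
print(round(Q_est))  # → 1000 (noiseless data, so the fit recovers Q exactly)
```

With real coda amplitudes the scatter about the log-linear trend is what produces the confidence bounds on Q quoted in the abstract.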
These results suggest that the NAA is not causing significant heating of the lithosphere above it. The shear velocities, however, are about 10% slower above the NAA - an effect that may be fossil, reflecting processes that occurred millions of years ago. 7. Interaction between mantle and crustal detachments: A nonlinear system controlling lithospheric extension Science.gov (United States) Rosenbaum, Gideon; Regenauer-Lieb, Klaus; Weinberg, Roberto F. 2010-11-01 We use numerical modeling to investigate the development of crustal and mantle detachments during lithospheric extension. Our models simulate a wide range of extensional systems with varying values of crustal thickness and heat flow, showing how strain localization in the mantle interacts with localization in the upper crust and controls the evolution of extensional systems. Model results reveal a richness of structures and deformation styles as a response to a self-organized mechanism that minimizes the internal stored energy of the system by localizing deformation. Crustal detachments, here referred to as low-angle normal decoupling horizons, are well developed during extension of overthickened (60 km) continental crust, even when the initial heat flow is relatively low (50 mW m-2). In contrast, localized mantle deformation is most pronounced when the extended lithosphere has a normal crustal thickness (30-40 km) and an intermediate heat flow (60-70 mW m-2). Results show a nonlinear response to subtle changes in crustal thickness or heat flow, characterized by abrupt and sometimes unexpected switches in extension modes (e.g., from diffuse extensional deformation to effective lithospheric-scale rupturing) or from mantle- to crust-dominated strain localization. We interpret this nonlinearity to result from the interference of doming wavelengths in the presence of multiple necking instabilities.
Disharmonic crust and mantle doming wavelengths result in efficient communication between shear zones at different lithospheric levels, leading to rupturing of the whole lithosphere. In contrast, harmonic crust and mantle doming inhibits interaction of shear zones across the lithosphere and results in a prolonged history of extension prior to continental breakup. 8. Lithospheric Structure of the Yamato Basin Inferred from Trans-dimensional Inversion of Receiver Functions Science.gov (United States) Akuhara, T.; Nakahigashi, K.; Shinohara, M.; Yamada, T.; Yamashita, Y.; Shiobara, H.; Mochizuki, K. 2017-12-01 The Yamato Basin, located in the southeast of the Japan Sea, has been formed by the back-arc opening of the Japan Sea. Wide-angle reflection surveys have revealed that the basin has an anomalously thickened crust compared with normal oceanic crust [e.g., Nakahigashi et al., 2013], while the deeper lithospheric structure has not been known so far. Revealing the lithospheric structure of the Yamato Basin will lead to a better understanding of the formation process of the Japan Sea and thus of the Japanese islands. In this study, as a first step toward understanding the lithospheric structure, we aim to detect the lithosphere-asthenosphere boundary (LAB) using receiver functions (RFs). We use teleseismic P waveforms recorded by broad-band ocean-bottom seismometers (BBOBS) deployed in the Yamato Basin. We calculated radial-component RFs using the data with the removal of water reverberations from the vertical-component records [Akuhara et al., 2016]. The resultant RFs are more complicated than those calculated at an on-land station, most likely due to sediment-related reverberations. This complexity does not allow either direct detection of a Ps conversion from the LAB or forward modeling by a simple structure composed of a handful of layers.
To overcome this difficulty, we conducted trans-dimensional Markov chain Monte Carlo inversion of RFs, in which we do not need to assume the number of layers in advance [e.g., Bodin et al., 2012; Sambridge et al., 2014]. Our preliminary results show an abrupt velocity reduction at 70 km depth, a far greater depth than expected for the LAB from the age of the lithosphere (~20 Ma, although still debated). If this low-velocity jump truly reflects the LAB, the anomalously thickened lithosphere will provide a new constraint on the complex formation history of the Japan Sea. Further study, however, is required to rule out the possibility that the obtained velocity jump is an artifact caused by overfitting of noisy data. 9. Estimation of Water Within the Lithospheric Mantle of Central Tibet from Petrological-Geophysical Investigations Science.gov (United States) Vozar, J.; Fullea, J.; Jones, A. G. 2013-12-01 Investigations of the lithosphere and sub-lithospheric upper mantle by integrated petrological-geophysical modeling of magnetotelluric (MT) and seismic surface-wave data, which are differently sensitive to temperature and composition, allow us to reduce the uncertainties associated with modeling these two data sets independently, as commonly undertaken. We use selected INDEPTH MT data, which have appropriate dimensionality and large penetration depths, across central Tibet for 1D modeling. Our deep resistivity models from the data can be classified into two distinct groups: (i) the Lhasa Terrane and (ii) the Qiangtang Terrane. For the Lhasa Terrane group, the models show the existence of an upper mantle conductive layer localized at depths of 200 km, whereas for the Qiangtang Terrane, this conductive layer is shallower, at depths of 120 km. We perform the integrated geophysical-petrological modeling of the MT and surface-wave data using the software package LitMod.
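The anomaly described in entry 8 (a velocity drop at 70 km depth beneath seafloor of roughly 20 Ma age) can be cross-checked against the classical half-space plate-cooling estimate of thermal LAB depth. The sketch below is an editorial illustration, not the authors' method; the diffusivity value and the choice of the 0.9 isotherm fraction are assumptions.

```python
import math

KAPPA = 1.0e-6          # assumed thermal diffusivity, m^2/s (typical mantle value)
SECONDS_PER_MA = 3.156e13

def isotherm_depth(age_ma, frac=0.9):
    """Depth where T reaches `frac` of the mantle temperature in a cooling
    half-space: T/Tm = erf(z / (2*sqrt(kappa*t)))  =>  z = 2*erfinv(frac)*sqrt(kappa*t)."""
    # invert erf by bisection (the math module provides erf but not erfinv)
    lo, hi = 0.0, 6.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < frac:
            lo = mid
        else:
            hi = mid
    t = age_ma * SECONDS_PER_MA
    return 2.0 * lo * math.sqrt(KAPPA * t)   # metres

expected_km = isotherm_depth(20.0) / 1e3
```

For ~20 Ma lithosphere this gives an expected thermal LAB depth near 58 km, noticeably shallower than the 70 km velocity reduction the abstract reports, which is consistent with the entry's characterization of the lithosphere as anomalously thick.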
The program facilitates definition of realistic temperature and pressure distributions within the upper mantle for a given thermal structure and oxide chemistry in the CFMAS system. This allows us to define a bulk geoelectric and seismic model of the upper mantle based on laboratory and xenolith data for the most relevant mantle minerals, and to compute synthetic geophysical observables. Our results suggest an 80-120 km-thick, dry lithosphere in the central part of the Qiangtang Terrane. In contrast, in the central Lhasa Terrane the predicted MT responses are too resistive for a dry lithosphere regardless of its thickness; according to seismic and topography data, the expected lithospheric thickness is about 200 km. The presence of small amounts of water significantly decreases the electrical resistivity of mantle rocks and is required to fit the MT responses. We test the hypothesis of small amounts of water (ppm scale) in the nominally anhydrous minerals of the lithospheric mantle. Such a small 10. Lithospheric Contributions to Arc Magmatism: Isotope Variations Along Strike in Volcanoes of Honshu, Japan Science.gov (United States) Kersting; Arculus; Gust 1996-06-07 Major chemical exchange between the crust and mantle occurs in subduction zone environments, profoundly affecting the chemical evolution of Earth. The relative contributions of the subducting slab, mantle wedge, and arc lithosphere to the generation of island arc magmas, and ultimately new continental crust, are controversial. Isotopic data for lavas from a transect of volcanoes in a single arc segment of northern Honshu, Japan, have distinct variations coincident with changes in crustal lithology. These data imply that the relatively thin crustal lithosphere is an active geochemical filter for all traversing magmas and is responsible for significant modification of primary mantle melts. 11.
Comparing gravity-based to seismic-derived lithosphere densities: A case study of the British Isles and surrounding areas NARCIS (Netherlands) Root, B.C.; Ebbing, J.; van der Wal, W.; England, R.W.; Vermeersen, L.L.A. 2017-01-01 Lithospheric density structure can be constructed from seismic tomography, gravity modelling, or using both data sets. The different approaches have their own uncertainties and limitations. This study aims to characterize and quantify some of the uncertainties in gravity modelling of lithosphere 12. The electrical conductivity of the upper mantle and lithosphere from the magnetic signal due to ocean tidal flow DEFF Research Database (Denmark) Schnepf, Neesha Regmi; Kuvshinov, Alexey; Grayver, Alexander galvanically with Earth’s lithosphere (i.e. by direct coupling of the source currents in the ocean with the underlying substrate), enabling conductivity estimations at shallower depths. Here we present the results of determining a 1-D conductivity-depth profile of oceanic lithosphere and upper mantle using... 13. Two-dimensional simulation of the June 11, 2010, flood of the Little Missouri River at Albert Pike Recreational Area, Ouachita National Forest, Arkansas Science.gov (United States) Wagner, Daniel M. 2013-01-01 In the early morning hours of June 11, 2010, substantial flooding occurred at Albert Pike Recreation Area in the Ouachita National Forest of west-central Arkansas, killing 20 campers. The U.S. Forest Service needed information concerning the extent and depth of flood inundation, the water velocity, and flow paths throughout Albert Pike Recreation Area for the flood and for streamflows corresponding to annual exceedance probabilities of 1 and 2 percent.
The two-dimensional flow model Fst2DH, part of the Federal Highway Administration’s Finite Element Surface-water Modeling System, and the graphical user interface Surface-water Modeling System (SMS) were used to perform a steady-state simulation of the flood in a 1.5-mile reach of the Little Missouri River at Albert Pike Recreation Area. Peak streamflows of the Little Missouri River and tributary Brier Creek served as inputs to the simulation, which was calibrated to the surveyed elevations of high-water marks left by the flood and then used to predict flooding that would result from streamflows corresponding to annual exceedance probabilities of 1 and 2 percent. The simulated extent of the June 11, 2010, flood matched the observed extent of flooding at Albert Pike Recreation Area. The mean depth of inundation in the camp areas was 8.5 feet in Area D, 7.4 feet in Area C, 3.8 feet in Areas A, B, and the Day Use Area, and 12.5 feet in Lowry’s Camp Albert Pike. The mean water velocity was 7.2 feet per second in Area D, 7.6 feet per second in Area C, 7.2 feet per second in Areas A, B, and the Day Use Area, and 7.6 feet per second in Lowry’s Camp Albert Pike. A sensitivity analysis indicated that varying the streamflow of the Little Missouri River had the greatest effect on simulated water-surface elevation, while varying the streamflow of tributary Brier Creek had the least effect. Simulated water-surface elevations were lower than those modeled by the U.S. Forest Service using the standard-step method, but the 14. COMPOSITIONAL AND THERMAL DIFFERENCES BETWEEN LITHOSPHERIC AND ASTHENOSPHERIC MANTLE AND THEIR INFLUENCE ON CONTINENTAL DELAMINATION Directory of Open Access Journals (Sweden) A. I. Kiselev 2015-01-01 Full Text Available The lower part of the lithosphere in collisional orogens may delaminate due to density inversion between the asthenosphere and the cold thickened lithospheric mantle.
Generally, standard delamination models have neglected density changes within the crust and the lithospheric mantle, which occur due to phase transitions and compositional variations upon changes of P-T parameters. Our attention is focused on effects of phase and density changes that may be very important and even dominant when compared with the effect of a simple change of the thermal mantle structure. The paper presents the results of numerical modeling for eclogitization of basalts of the lower crust as well as phase composition changes and density of underlying peridotite resulting from tectonic thickening of the lithosphere and its foundering into the asthenosphere. As the thickness of the lower crust increases, the mafic granulite (basalt) passes into eclogite, and density inversion occurs at the accepted crust-mantle boundary (P=20 kbar) because the newly formed eclogite is heavier than the underlying peridotite by 6 % (abyssal peridotite, according to [Boyd, 1989]). The density difference is a potential energy for delamination of the eclogitic portion of the crust. According to the model, P=70 kbar and T=1300 °C correspond to conditions at the lower boundary of the lithosphere. Assuming an adiabatic temperature distribution within the asthenosphere, its value at the given parameters ranges from 1350 °C to 1400 °C. Under dry conditions, density inversion occurs for identical lithospheric and asthenospheric compositions owing to the temperature difference of 100 °C, with a density difference of only 0.0022 %. Differences of the two other asthenospheric compositions (primitive mantle and lherzolite KH) as compared to the lithosphere (abyssal peridotite) are not compensated for by a higher temperature. The asthenospheric density is higher than that of the lithospheric base 15.
Samovar: a thermomechanical code for modeling of geodynamic processes in the lithosphere-application to basin evolution DEFF Research Database (Denmark) Elesin, Y; Gerya, T; Artemieva, Irina 2010-01-01 We present a new 2D finite difference code, Samovar, for high-resolution numerical modeling of complex geodynamic processes. Examples are collision of lithospheric plates (including mountain building and subduction) and lithosphere extension (including formation of sedimentary basins, regions...... of extended crust, and rift zones). The code models deformation of the lithosphere with viscoelastoplastic rheology, including erosion/sedimentation processes and formation of shear zones in areas of high stresses. It also models steady-state and transient conductive and advective thermal processes including...... partial melting and magma transport in the lithosphere. The thermal and mechanical parts of the code are tested for a series of physical problems with analytical solutions. We apply the code to geodynamic modeling by examining numerically the processes of lithosphere extension and basin formation... 16. "Earth, from inside and outside - school activities based on seismology and astronomy" Science.gov (United States) 2016-04-01 Through a multidisciplinary work that integrates Geography education with the other Earth Sciences, we developed an educational project to raise the students' awareness of seismic hazard and to disseminate good practices of earthquake safety. The Romanian Educational Seismic Network (ROEDUSEIS) project (started in 2012) is developed and implemented in partnership with schools from different Romanian cities, our school being one of these. In each participating school a SEP educational seismometer is installed. It is the first educational initiative in Romania in the field of seismology involving the National Institute for Earth Physics - NIEP as coordinator. 
The e-learning platform website (http://www.roeduseis.ro) represents a great opportunity for students to use real advanced research instruments and scientific data analysis tools in their everyday school activities and a link to observations of Earth phenomena and Earth science in general. The most important educational objectives are related to: preparing comprehensive educational materials as resources for training students and teachers in the analysis and interpretation of seismological data, experimentation with new technologies in designing and implementing new didactic activities, professional development and support for teachers, and development of a science curriculum module. The scientific objective is to introduce into schools the use of scientific instruments such as the seismometer and experimental methods (seismic data analysis). The educational material entitled "Earthquakes and their effects" is organized as a guide for teachers accompanied by a booklet for students. The structure of the educational material is divided into theoretical chapters followed by sections with activities and experiments adapted to the level of understanding particular to our students. The ROEDUSEIS e-platform should be considered a modern method for teaching and learning that integrates and completes the work in the classroom. The 17. Proceedings of the OECD/NEA workshop on the relations between seismological data and seismic engineering International Nuclear Information System (INIS) 2003-01-01 The Committee on the Safety of Nuclear Installations (CSNI) of the OECD-NEA co-ordinates the NEA activities concerning the technical aspects of design, construction and operation of nuclear installations insofar as they affect the safety of such installations.
The Integrity and Ageing Working Group (IAGE WG) of the CSNI deals with the integrity of structures and components, and has three sub-groups, dealing with the integrity of metal components and structures, ageing of concrete structures, and the seismic behaviour of structures. The sub-group dealing with the seismic behaviour of structures proposed this workshop. The OECD-NEA workshop on the relations between seismological data and seismic engineering analyses was held on October 17-18, 2002. A field visit to the Izmit area, where the fault scarp is still visible, was organised on Wednesday, October 16, 2002. The Türkiye Atom Enerjisi Kurumu, TAEK (Turkish Atomic Energy Agency), in Istanbul, Turkey, hosted the workshop. A recommendation of the OECD workshop on the engineering characterisation of seismic input (hosted by the United States Nuclear Regulatory Commission and organised by Brookhaven National Laboratory on November 15-17, 1999) was to foster the growth of interaction between 'design engineers' and 'ground motion specialists'. The objective of the Istanbul workshop was to address this recommendation. The workshop gave seismologists the opportunity to present observed damage and related ground motions, and design engineers the opportunity to present current techniques used in the evaluation of seismic hazards. Bridging the gap between these two fields was a key objective: this workshop was a forum for bringing together the two communities. In addition, the location of the workshop was particularly interesting and provided possibilities for several of the host country participants to discuss the 1999 Kocaeli earthquake. On the basis of lessons learned from large earthquakes over the last decade, the 18.
Rebuild of the Bulletin of the International Seismological Centre (ISC), part 1: 1964-1979 Science.gov (United States) Storchak, Dmitry A.; Harris, James; Brown, Lonn; Lieser, Kathrin; Shumba, Blessing; Verney, Rebecca; Di Giacomo, Domenico; Korger, Edith I. M. 2017-12-01 The data from the Bulletin of the International Seismological Centre (ISC) have always been and still remain in demand for a wide range of studies in the geosciences. The unique features of the Bulletin include long-term coverage (1904-present), the most comprehensive set of included seismic data from the majority of permanent seismic networks at any given time in the history of instrumental recording (currently 150), and homogeneity of the data and their representation. In order to preserve this homogeneity, the ISC has followed its own standard seismic event processing procedures, which did not substantially change until the early 2000s. Several considerable and necessary advancements in the ISC data collection and seismic event location procedures have created a need to rebuild the data for preceding years in line with the new procedures. A project was therefore set up to rebuild the ISC Bulletin for the period from the beginning of the ISC data until the end of data year 2010. The project is known as the Rebuild of the ISC Bulletin. From the data month of January 2011, the ISC data have already been processed with the fully tested and established new procedures and do not require alteration. For many decades it was inconceivable even to think about such a project, but great advances in computer power and increased support by the ISC Member-Institutions and Sponsors have given us a chance to perform it. Having gained a great deal of experience along the way, we believe that within a few years the entire period of the ISC data will be reprocessed and extended to cover the entire period of instrumental seismological recordings from 1904 till the present.
The purpose of this article is to describe the work on reprocessing the ISC Bulletin data under the Rebuild project. We also announce the release of the rebuilt ISC Bulletin for the period 1964-1979, with all seismic events reprocessed and relocated in line with the modern ISC procedures, 68,000 new events, and 255 new stations 19. It’s our Fault: Immersing Young Learners in Authentic Practices of Seismology Science.gov (United States) Kilb, D. L.; Moher, T.; Wiley, J. 2009-12-01 The scalable RoomQuake seismology project uses a learning-technology framework, embedded phenomena (Moher, 2006), that simulates seismic phenomena mapped directly onto the physical space of classrooms. This project, aimed at the upper elementary level, situates students as the scientists engaged in an extended investigation designed to discover the spatial, temporal, and intensity distributions of a series of earthquakes. The project emulates earthquake occurrence over a condensed time and spatial span, with students mapping an earthquake fault imagined to be running through their classroom. The students learn: basic seismology terms; the ability to identify seismic P- and S-waves; skills associated with trilateration; nomogram/graph reading skills; and the ability to recognize the emergence of a fault based on RoomQuake geometries. From the students’ perspectives, and similar to real-world earthquakes, RoomQuakes occur at unknown times over the course of several weeks. Multiple computers distributed around the perimeter of the classroom serve as simulated seismographs that depict continuous strip-chart seismic recordings. Most of the time the seismograms reflect background noise, but at (apparently) unpredictable times a crescendoing rumble (emanating from a subwoofer) signals a RoomQuake. Hearing this signal, students move to the seismic stations to read the strip charts.
Next, the students trilaterate the RoomQuake epicenter by arcing calibrated strings of length proportional to S-P latencies from each seismic station until a common point is identified. Each RoomQuake epicenter is marked by hanging a Styrofoam ball (color-coded by magnitude) from the ceiling. The developing ‘fault’ within the classroom provides an immersive historic record of the RoomQuakes’ spatial distribution. Students also maintain a temporal record of events on a large time-line on the wall (recognizing time-related phenomena like aftershocks) and a record of magnitude frequencies on 20. Complex inner core of the Earth: The last frontier of global seismology Science.gov (United States) Tkalčić, Hrvoje 2015-03-01 The days when the Earth's inner core (IC) was viewed as a homogeneous solid sphere surrounded by the liquid outer core (OC) are now behind us. Due to a limited number of data sampling the IC and a lack of experimentally controlled conditions in deep Earth studies, it has been difficult to scrutinize competing hypotheses in this active area of research. However, a number of new concepts linking IC structure and dynamics have been proposed lately to explain different types of seismological observations. A common denominator of recent observational work on the IC is increased complexity seen in IC physical properties such as its isotropic and anisotropic structure, attenuation, inner core boundary (ICB) topography, and its rotational dynamics. For example, small-scale features have been observed to exist as a widespread phenomenon in the uppermost inner core, probably superimposed on much longer-scale features. The characterization of small-scale features sheds light on the nature of the solidification process and helps in understanding the seismologically observed hemispherical dichotomy of the IC.
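The string-and-arc trilateration described in the RoomQuake entry above reduces to intersecting circles whose radii are derived from the S-P latencies. A minimal sketch, in which the wave speeds, station layout, and epicenter are all hypothetical classroom values:

```python
import math

VP, VS = 6.0, 3.5  # assumed P and S wave speeds (hypothetical classroom values)

def sp_to_distance(dt):
    # The "string length" rule: distance = S-P latency * Vp*Vs / (Vp - Vs)
    return dt * VP * VS / (VP - VS)

def trilaterate(s1, s2, s3, d1, d2, d3):
    """Intersect three circles |x - s_i| = d_i by linearizing:
    subtracting the first circle's equation leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = s1, s2, s3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Synthetic "RoomQuake": three stations and the S-P latency each would record
# for a known epicenter at (3, 2); the recovered location should match.
stations = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0)]
latencies = [math.dist(s, (3.0, 2.0)) * (VP - VS) / (VP * VS) for s in stations]
epi = trilaterate(*stations, *[sp_to_distance(t) for t in latencies])
```

With consistent synthetic latencies the linear system recovers the epicenter exactly; with noisy classroom measurements the arcs only approximately intersect, which is why the students search for a "common point" by eye.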
The existence of variations in the rate and level of solidification is a plausible physical outcome in an environment where vigorous compositional convection in the OC and variations in heat exchange across the ICB may control the process of crystal growth. However, further progress is hindered by the fact that the current traveltime data of PKIKP waves traversing the IC do not allow discriminating between variations in isotropic P wave velocity and velocity anisotropy. Future studies of attenuation in the IC might provide crucial information about IC structure, although another trade-off exists—that of the relative contribution of scattering versus viscoelastic attenuation and the connection with the material properties. Future installations of dense arrays, cross paths of waves that sample the IC, and corresponding array studies will be a powerful tool to image and 1. Autonomous BBOBS-NX (NX-2G) for New Era of Ocean Bottom Broadband Seismology Science.gov (United States) Shiobara, H.; Ito, A.; Sugioka, H.; Shinohara, M. 2017-12-01 The broadband ocean bottom seismometer (BBOBS) and its new generation system (BBOBS-NX) have been developed in Japan, and since 1999 we have performed several test and practical observations to create and establish a new category of ocean-floor broadband seismology. The data obtained by our BBOBS and BBOBS-NX have proved to be adequate for broadband seismic analyses. In particular, the BBOBS-NX can obtain horizontal data comparable to land sites at longer periods (10 s and longer). Moreover, the BBOBST-NX is in practical evaluation for mobile tilt observation that enables dense geodetic monitoring. The BBOBS-NX system is a powerful tool, although it has the intrinsic limitation of requiring ROV operation. If this system could be used without the ROV, like the BBOBS, it should lead to a true breakthrough in ocean bottom seismology. Hereafter, the new autonomous BBOBS-NX is denoted NX-2G for short.
The main problem in realizing the NX-2G is the tilt of the sensor unit on landing, which exceeds the acceptable limit (±8°) in about 50% of cases. As we had no evidence of when and how this tilt occurred, we tried to observe it during the BBOBST-NX landing in 2015 by attaching a video camera and an acceleration logger. The result shows that the tilt on landing was determined by the final posture of the system at penetration into the sediment, and a large oscillating tilt of more than ±10° was observed during descent. The function of the NX-2G system is based on a three-stage operation as shown in the image. The glass float is intended not only to provide enough buoyancy to extract the sensor unit, but also to suppress the oscillating tilt of the system during descent. In Oct. 2016, we made the first in-situ test of the NX-2G system with a ROV. It was dropped from the sea surface with the video camera and the acceleration logger. The ROV was used to watch the operation of the system at the seafloor. The landing appeared to go well, and this was examined using the acceleration data. As the maximum tilt in 2. Towards monitoring the englacial fracture state using virtual-reflector seismology Science.gov (United States) Lindner, F.; Weemstra, C.; Walter, F.; Hadziioannou, C. 2018-04-01 In seismology, coda wave interferometry (CWI) is an effective tool to monitor time-lapse changes using later-arriving, multiply scattered coda waves. Typically, CWI relies on an estimate of the medium's impulse response. The latter is retrieved through simple time-averaging of receiver-receiver cross-correlations of the ambient field, i.e. seismic interferometry (SI). In general, the coda are induced by heterogeneities in the Earth. Being comparatively homogeneous, however, ice bodies such as glaciers and ice sheets exhibit little scattering. In addition, the temporal stability of the time-averaged cross-correlations suffers from temporal variations in the distribution and amplitude of the passive seismic sources.
Consequently, application of CWI to ice bodies is currently limited. Nevertheless, fracturing and changes in the englacial macroscopic water content alter the bulk elastic properties of ice bodies, which can be monitored with cryoseismological measurements. To overcome the currently limited applicability of CWI to ice bodies, we therefore introduce virtual-reflector seismology (VRS). VRS relies on a so-called multidimensional deconvolution (MDD) process applied to the time-averaged cross-correlations. The technique results in the retrieval of a medium response that includes virtual reflections from a contour of receivers enclosing the region of interest (i.e., the region to be monitored). The virtual reflections can be interpreted as artificial coda replacing the (lacking) natural scattered coda. Hence, this artificial coda might be exploited for the purpose of CWI. From an implementation point of view, VRS is similar to SI by MDD, which, as its name suggests, also relies on a multidimensional deconvolution process. SI by MDD, however, does not generate additional virtual reflections. Advantageously, both techniques mitigate spurious coda changes associated with temporal variations in the distribution and amplitude of the passive seismic sources. In this work, we 3. Using Social Networks to Educate Seismology to Non-Science Audiences in Costa Rica Science.gov (United States) 2013-12-01 Costa Rica has a very high rate of seismicity, with 63 damaging earthquakes in its history as a nation and 12 felt earthquakes per month on average. In Costa Rica, earthquakes are part of everyday life; hence the inhabitants are highly aware of seismic activity and geological processes. However, formal educational programs and mainstream media have not yet addressed the appropriate way of educating the public on these topics, and thus myths and misconceptions are common.
With the increasing influence of social networks on information diffusion, they have become a new channel to address this issue in Costa Rica. The National Seismological Network of Costa Rica (RSN) is a joint effort between the University of Costa Rica and the Costa Rican Institute of Electricity. Since 1973, the RSN has studied the seismicity and volcanic activity in the country. Since January 2011, the RSN has maintained an active Facebook page, on which felt earthquakes are reported and information on seismology, geological processes, scientific talks, and RSN activities is routinely posted. Additionally, the RSN gets almost instantaneous feedback from its followers, including people from all rural and urban areas of Costa Rica. In this study, we analyze the demographics, geographic distribution, reach of specific Facebook posts per topic, and the episodic growth of RSN followers related to specific seismic events. We observe that 70% of the RSN users are between the ages of 18 and 34. We consistently observe that certain regions of the country have more Facebook activity, although those regions are neither the most populated nor those with the highest connectivity index. We interpret this pattern as the result of a higher awareness of geological hazards in those specific areas. We notice that educational posts are 'liked' as much as most earthquake reports. For exceptional seismic events, we observe sudden increments in the number of RSN followers on the order of tens of thousands. For example, the May 2013 Sixaola earthquake (Mw 4. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics NARCIS (Netherlands) Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W. 2012-01-01 Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction.
Our experiments are focused on using open and closed (free 5. Abnormal lithium isotope composition from the ancient lithospheric mantle beneath the North China Craton. Science.gov (United States) Tang, Yan-Jie; Zhang, Hong-Fu; Deloule, Etienne; Su, Ben-Xun; Ying, Ji-Feng; Santosh, M; Xiao, Yan 2014-03-04 Lithium elemental and isotopic compositions of olivines in peridotite xenoliths from Hebi in the North China Craton provide direct evidence for the highly variable δ(7)Li in Archean lithospheric mantle. The δ(7)Li in the cores of olivines from the Hebi high-Mg# peridotites (Fo > 91) show extreme variation from -27 to +21, in marked deviation from the δ(7)Li range of fresh MORB (+1.6 to +5.6), although the Li abundances of the olivines are within the range of normal mantle (1-2 ppm). The Li abundances and δ(7)Li characteristics of the Hebi olivines could not have been produced by recent diffusion-driven isotopic fractionation of Li, and therefore the δ(7)Li in the cores of these olivines record the isotopic signature of the subcontinental lithospheric mantle. Our data demonstrate that abnormal δ(7)Li may be preserved in the ancient lithospheric mantle, as observed in our study from the central North China Craton, which suggests that the subcontinental lithospheric mantle has experienced modification by fluid/melt derived from recycled oceanic crust. 6. A Seismic Transmission System for Continuous Monitoring of the Lithosphere: A Proposition NARCIS (Netherlands) Unger, R. 2002-01-01 The main objective of this thesis is to enhance earthquake prediction feasibility. We present the concept and the design layout of a novel seismic transmission system capable of continuously monitoring the lithosphere for changes in Earth physics parameters governing seismic wave propagation. 7.
Spatial patterns in the distribution of kimberlites: relationship to tectonic processes and lithosphere structure DEFF Research Database (Denmark) Chemia, Zurab; Artemieva, Irina; Thybo, Hans 2015-01-01 of kimberlite melts through the lithospheric mantle, which forms the major pipe. Stage 2 (second-order process) begins when the major pipe splits into daughter sub-pipes (tree-like pattern) at crustal depths. We apply cluster analysis to the spatial distribution of all known kimberlite fields with the goal... 8. Spatial Patterns in Distribution of Kimberlites: Relationship to Tectonic Processes and Lithosphere Structure DEFF Research Database (Denmark) Chemia, Zurab; Artemieva, Irina; Thybo, Hans 2014-01-01 of kimberlite melts through the lithospheric mantle, which forms the major pipe. Stage 2 (second-order process) begins when the major pipe splits into daughter sub-pipes (tree-like pattern) at crustal depths. We apply cluster analysis to the spatial distribution of all known kimberlite fields with the goal... 9. Images of lithospheric heterogeneities in the Armorican segment of the Hercynian Range in France Czech Academy of Sciences Publication Activity Database Judenherc, S.; Granet, M.; Brun, J. P.; Poupinet, G.; Plomerová, Jaroslava; Mocquet, A.; Achauer, U. 2002-01-01 Roč. 358, 1/4 (2002), s. 121-134 ISSN 0040-1951 Institutional research plan: CEZ:AV0Z3012916 Keywords: seismic tomography * seismic anisotropy * continental collision * Hercynian lithosphere Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.409, year: 2002 10. Pool Structures: A New Type of Interaction Zones of Lithospheric Plate Flows Science.gov (United States) Garetskyi, R. G.; Leonov, M. G.
2018-02-01 Study of tectono-geodynamic clusters of the continental lithosphere (the Sloboda cluster of the East European Platform and the Pamir cluster of Central Asia) permitted identification of pool structures, which are a specific type of zone of intraplate interaction of rock masses. 11. Modeling the interaction between lithospheric and surface processes in foreland basins NARCIS (Netherlands) Garcia-Castellanos, D.; Cloetingh, S. 2012-01-01 This chapter reviews a number of key advances in quantitative understanding of foreland basins since the early 1990s, with a focus on the interplay between lithospheric flexure, erosion, and river transport. Flexure can be the result of topographic loading and slab-pull forces, though can also 12. Lithosphere structure and upper mantle characteristics below the Bay of Bengal Digital Repository Service at National Institute of Oceanography (India) Rao, G.S.; Radhakrishna, M.; Sreejith, K.M.; Krishna, K.S.; Bull, J.M. The oceanic lithosphere in the Bay of Bengal (BOB) formed 80-120 Ma following the breakup of eastern Gondwanaland. Since its formation, it has been affected by the emplacement of two long N-S trending linear aseismic ridges (85°E and Ninetyeast... 13. Seismic and mechanical anisotropy and the past and present deformation of the Australian lithosphere NARCIS (Netherlands) Simons, Frederik J.; Hilst, R.D. van der 2003-01-01 We interpret the three-dimensional seismic wave-speed structure of the Australian upper mantle by comparing its azimuthal anisotropy to estimates of past and present lithospheric deformation. We infer the fossil strain field from the orientation of gravity anomalies relative to topography, 14. Lithospheric-scale structures from the perspective of analogue continental collision. NARCIS (Netherlands) Sokoutis, D.; Burg, J.P.; Bonini, M.; Corti, G.; Cloetingh, S.A.P.L. 
2005-01-01 Analogue models were employed to investigate continental collision, addressing the roles of (1) a suture zone separating different crustal blocks, (2) mid-crustal weak layers and (3) mantle strengths. These models confirmed that low-amplitude lithospheric and crustal buckling is the primary response 15. Localization instability and the origin of regularly-spaced faults in planetary lithospheres Science.gov (United States) Montesi, Laurent Gilbert Joseph 2002-10-01 Brittle deformation is not distributed uniformly in planetary lithospheres but is instead localized on faults and ductile shear zones. In some regions such as the Central Indian Basin or martian ridged plains, localized shear zones display a characteristic spacing. This pattern can constrain the mechanical structure of the lithosphere if a model that includes the development of localized shear zones and their interaction with the non-localizing levels of the lithosphere is available. I construct such a model by modifying the buckling analysis of a mechanically-stratified lithosphere idealization, by allowing for rheologies that have a tendency to localize. The stability of a rheological system against localization is indicated by its effective stress exponent, ne. That quantity must be negative for the material to have a tendency to localize. I show that a material deforming brittly or by frictional sliding has ne < 0. When this model is subjected to horizontal extension or compression, infinitesimal perturbations of its interfaces grow at a rate that depends on their wavelength. Two superposed instabilities develop if ne < 0. 16. Ancient melt depletion overprinted by young carbonatitic metasomatism in the New Zealand lithospheric mantle DEFF Research Database (Denmark) Scott, James M.; Hodgkinson, A.; Palin, J.M. 2014-01-01 radiogenic than, the HIMU mantle reservoir.
Metasomatism appears to pre-date ubiquitous pyroxene core to rim Al diffusion zoning, which may have resulted from cooling of the lithospheric mantle following cessation of Late Cretaceous-Eocene rifting of Zealandia from Gondwana. Nd isotope data, however, suggest... 17. Earthquake rupture below the brittle-ductile transition in continental lithospheric mantle. Science.gov (United States) Prieto, Germán A; Froment, Bérénice; Yu, Chunquan; Poli, Piero; Abercrombie, Rachel 2017-03-01 Earthquakes deep in the continental lithosphere are rare and hard to interpret in our current understanding of temperature control on brittle failure. The recent lithospheric mantle earthquake with a moment magnitude of 4.8 at a depth of ~75 km in the Wyoming Craton was exceptionally well recorded and thus enabled us to probe the cause of these unusual earthquakes. On the basis of complete earthquake energy balance estimates using broadband waveforms and temperature estimates using surface heat flow and shear wave velocities, we argue that this earthquake occurred in response to ductile deformation at temperatures above 750°C. The high stress drop, low rupture velocity, and low radiation efficiency are all consistent with a dissipative mechanism. Our results imply that earthquake nucleation in the lithospheric mantle is not exclusively limited to the brittle regime; weakening mechanisms in the ductile regime can allow earthquakes to initiate and propagate. This finding has significant implications for understanding deep earthquake rupture mechanics and rheology of the continental lithosphere. 18. A note on 2-D lithospheric deformation due to a blind strike-slip fault …seismic deformation. Several researchers have developed models of coseismic lithospheric deformation. Rybicki (1971) found a closed-form analytical solution for the problem of a long vertical strike-slip fault in a two-layer model of the earth. Chinnery and Jovanovich (1972) extended the solution to a three-layer model. 19.
Shallow and buoyant lithospheric subduction: causes and implications from thermo-chemical numerical modeling NARCIS (Netherlands) Hunen, Jeroen van 2001-01-01 Where two lithospheric plates converge on the Earth, one of them disappears into the mantle. The dominant driving mechanism for plate motion is regarded to be 'slab pull': the subducted plate, the slab, exerts a pulling force on the attached plate at the surface. However, what has been puzzling 20. Earth's lithospheric magnetic field determined to spherical harmonic degree 90 from CHAMP satellite measurements DEFF Research Database (Denmark) Maus, S.; Rother, M.; Hemant, K. 2006-01-01 of the lithospheric field down to an altitude of about 50 km at lower latitudes, with reduced accuracy in the polar regions. Crustal features come out significantly sharper than in previous models. In particular, bands of magnetic anomalies along subduction zones become visible by satellite for the first time.... 1. Influence of the lithosphere-asthenosphere boundary on the stress field northwest of the Alps Science.gov (United States) Maury, J.; Cornet, F. H.; Cara, M. 2014-11-01 In 1356, a magnitude 6-7 earthquake occurred near Basel, in Switzerland. But recent compilations of GPS measurements reveal that measured horizontal deformation rates in northwestern continental Europe are smaller than error bars on the measurements, showing that present tectonic activity, if any, is very small in this area. We propose to reconcile these apparently antinomic observations with a mechanical model of the lithosphere that takes into account the geometry of the lithosphere-asthenosphere boundary, assuming that the only loading mechanism is gravity. The lithosphere is considered to be an elastoplastic material satisfying a Von Mises plasticity criterion. The model, which is 400 km long, 360 km wide and 230 km thick, is centred near Belfort in eastern France, with its width oriented parallel to the N145°E direction.
It also takes into account the real topography of both the ground surface and the Moho discontinuity. Not only does the model reproduce observed principal stress orientations, it also identifies a plastic zone that roughly fits the most seismically active domain of the region. Interestingly, a somewhat similar stress map may be produced by considering an elastic lithosphere and an ad hoc horizontal 'tectonic' stress field. However, for the latter model, examination of the plasticity criterion suggests that plastic deformation should have taken place. It is concluded that the present-day stress field in this region is likely controlled by gravity and rheology, rather than by active Alpine tectonics. 2. Seismology in Schools: an integrated approach to funding, developing and implementing a coordinated programme for teachers and high school students Science.gov (United States) Blake, T. A.; Jones, A. G.; Campbell, G. 2010-12-01 Statistics in Ireland show that physics at Advanced Level in Secondary Schools is declining in popularity and is the most likely subject to be cut first from the curriculum in a curriculum readjustment by school authorities. In an attempt to attract students to study Earth science and seismology, the School of Cosmic Physics, DIAS embarked on an outreach programme in 2007 to promote Earth science, particularly seismology, in schools at both Primary and Secondary Levels. Since its inception, DIAS's Seismology in Schools programme has been very well received, with seismometers installed in over fifty schools across the State. Although this number may appear small, given that the population of Ireland is 4M, this ratio of 1 per 80,000 compares favourably with the U.K. (70 in a population of 70M, 1 per 1M) and the U.S.A. (200 in a population of 300M, 1 per 1.5M), a penetration 15-20 times greater.
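The per-capita comparison quoted above is simple arithmetic and can be checked directly; the population and instrument counts below are the approximate figures given in the abstract:

```python
# People per seismometer for each national programme (figures from the text).
programmes = {
    "Ireland": (4_000_000, 50),     # (population, instruments)
    "UK": (70_000_000, 70),
    "USA": (300_000_000, 200),
}

people_per_instrument = {
    country: population / count
    for country, (population, count) in programmes.items()
}

# Ireland: 1 per 80,000; UK: 1 per 1,000,000; USA: 1 per 1,500,000.
for country, ratio in people_per_instrument.items():
    print(f"{country}: 1 instrument per {ratio:,.0f} people")

# Relative penetration of the Irish network: ~12.5x the UK and ~18.8x the
# USA per capita, close to the "15-20 times greater" quoted in the text.
uk_factor = people_per_instrument["UK"] / people_per_instrument["Ireland"]
usa_factor = people_per_instrument["USA"] / people_per_instrument["Ireland"]
```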
The phenomenal success of our Seismology in Schools programme has been helped significantly by the support we have received from the British Geological Survey (BGS) and IRIS (Incorporated Research Institutions for Seismology) in terms of hardware, software and advice. Similarly, the programme would be a pale reflection of what it is today if the Directors of the Educational Centres (ATECI, Association of Teacher's/Education Centres in Ireland) across Ireland had not become enthused and funded the purchase of 34 additional seismometers, and the Geological Survey of Ireland purchased a further six. Also, funding support from Discover Science and Engineering (DSE) was absolutely critical for us to roll out this hugely enlarged programme of 50 seismometers from the originally envisioned four. As this programme is an initiation into seismology for students, it is important to stress that the seismometer is not used in the schools as a professional recording instrument but helps students visualize what seismology and the recording of earthquakes comprises. Essential to the 3. The 21 August 2017 Ischia (Italy) Earthquake Source Model Inferred From Seismological, GPS, and DInSAR Measurements Science.gov (United States) De Novellis, V.; Carlino, S.; Castaldo, R.; Tramelli, A.; De Luca, C.; Pino, N. A.; Pepe, S.; Convertito, V.; Zinno, I.; De Martino, P.; Bonano, M.; Giudicepietro, F.; Casu, F.; Macedonio, G.; Manunta, M.; Cardaci, C.; Manzo, M.; Di Bucci, D.; Solaro, G.; Zeni, G.; Lanari, R.; Bianco, F.; Tizzani, P. 2018-03-01 The causative source of the first damaging earthquake instrumentally recorded in the Island of Ischia, occurred on 21 August 2017, has been studied through a multiparametric geophysical approach. In order to investigate the source geometry and kinematics we exploit seismological, Global Positioning System, and Sentinel-1 and COSMO-SkyMed differential interferometric synthetic aperture radar coseismic measurements. 
Our results indicate that the retrieved solutions from the geodetic data modeling and the seismological data are plausible; in particular, the best fit solution consists of an E-W striking, south dipping normal fault, with its center located at a depth of 800 m. Moreover, the retrieved causative fault is consistent with the rheological stratification of the crust in this zone. This study allows us to improve the knowledge of the volcano-tectonic processes occurring on the Island, which is crucial for a better assessment of the seismic risk in the area. 4. floodzones_ouachita_FEMA_1999 Data.gov (United States) Louisiana Geographic Information Center — The Q3 Flood Data are derived from the Flood Insurance Rate Maps (FIRMS) published by the Federal Emergency Management Agency (FEMA). The file is georeferenced to... 5. Using crustal thickness and subsidence history on the Iberia-Newfoundland margins to constrain lithosphere deformation modes during continental breakup Science.gov (United States) Jeanniot, Ludovic; Kusznir, Nick; Manatschal, Gianreto; Mohn, Geoffroy 2014-05-01 Observations at magma-poor rifted margins such as Iberia-Newfoundland show a complex lithosphere deformation history during continental breakup and seafloor spreading initiation leading to complex OCT architecture with hyper-extended continental crust and lithosphere, exhumed mantle and scattered embryonic oceanic crust and continental slivers. Initiation of seafloor spreading requires both the rupture of the continental crust and lithospheric mantle, and the onset of decompressional melting. Their relative timing controls when mantle exhumation may occur; the presence or absence of exhumed mantle provides useful information on the timing of these events and constraints on lithosphere deformation modes. A single lithosphere deformation mode leading to continental breakup and sea-floor spreading cannot explain observations. 
We have determined the sequence of lithosphere deformation events for two profiles across the present-day conjugate Iberia-Newfoundland margins, using forward modelling of continental breakup and seafloor spreading initiation calibrated against observations of crustal basement thickness and subsidence. Flow fields, representing a sequence of lithosphere deformation modes, are generated by a 2D finite element viscous flow model (FeMargin), and used to advect lithosphere and asthenosphere temperature and material. FeMargin is kinematically driven by divergent deformation in the upper 15-20 km of the lithosphere inducing passive upwelling beneath that layer; extensional faulting and magmatic intrusions deform the topmost upper lithosphere, consistent with observations of deformation processes occurring at slow spreading ocean ridges (Cannat, 1996). Buoyancy enhanced upwelling, as predicted by Braun et al. (2000) is also kinematically included in the lithosphere deformation model. Melt generation by decompressional melting is predicted using the parameterization and methodology of Katz et al. (2003). The distribution of lithosphere deformation, the 6. Highly CO2-supersaturated melts in the Pannonian lithospheric mantle - A transient carbon reservoir? Science.gov (United States) Créon, Laura; Rouchon, Virgile; Youssef, Souhail; Rosenberg, Elisabeth; Delpech, Guillaume; Szabó, Csaba; Remusat, Laurent; Mostefaoui, Smail; Asimow, Paul D.; Antoshechkina, Paula M.; Ghiorso, Mark S.; Boller, Elodie; Guyot, François 2017-08-01 Subduction of carbonated crust is widely believed to generate a flux of carbon into the base of the continental lithospheric mantle, which in turn is the likely source of widespread volcanic and non-volcanic CO2 degassing in active tectonic intracontinental settings such as rifts, continental margin arcs and back-arc domains. 
However, the magnitude of the carbon flux through the lithosphere and the budget of stored carbon held within the lithospheric reservoir are both poorly known. We provide new constraints on the CO2 budget of the lithospheric mantle below the Pannonian Basin (Central Europe) through the study of a suite of xenoliths from the Bakony-Balaton Highland Volcanic Field. Trails of secondary fluid inclusions, silicate melt inclusions, networks of melt veins, and melt pockets with large and abundant vesicles provide numerous lines of evidence that mantle metasomatism affected the lithosphere beneath this region. We obtain a quantitative estimate of the CO2 budget of the mantle below the Pannonian Basin using a combination of innovative analytical and modeling approaches: (1) synchrotron X-ray microtomography, (2) NanoSIMS, Raman spectroscopy and microthermometry, and (3) thermodynamic models (Rhyolite-MELTS). The three-dimensional volumes reconstructed from synchrotron X-ray microtomography allow us to quantify the proportions of all petrographic phases in the samples and to visualize their textural relationships. The concentration of CO2 in glass veins and pockets ranges from 0.27 to 0.96 wt.%, higher than in typical arc magmas (0-0.25 wt.% CO2), whereas the H2O concentration ranges from 0.54 to 4.25 wt.%, on the low end for estimated primitive arc magmas (1.9-6.3 wt.% H2O). Trapping pressures for vesicles were determined by comparing CO2 concentrations in glass to CO2 saturation as a function of pressure in silicate melts, suggesting pressures between 0.69 and 1.78 GPa. These values are generally higher than trapping pressures for fluid inclusions 7.
Separation of Stochastic and Deterministic Information from Seismological Time Series with Nonlinear Dynamics and Maximum Entropy Methods International Nuclear Information System (INIS) Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias 2007-01-01 We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful to characterize and make models of different aspects of complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information to study and understand geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The mentioned method allows an optimal analysis of the available information 8. VERCE: a productive e-Infrastructure and e-Science environment for data-intensive seismology research Science.gov (United States) Vilotte, J. P.; Atkinson, M.; Spinuso, A.; Rietbrock, A.; Michelini, A.; Igel, H.; Frank, A.; Carpené, M.; Schwichtenberg, H.; Casarotti, E.; Filgueira, R.; Garth, T.; Germünd, A.; Klampanos, I.; Krause, A.; Krischer, L.; Leong, S. H.; Magnoni, F.; Matser, J.; Moguilny, G. 2015-12-01 Seismology addresses both fundamental problems in understanding the Earth's internal wave sources and structures and augmented societal applications, like earthquake and tsunami hazard assessment and risk mitigation; and puts a premium on open-data accessible by the Federated Digital Seismological Networks. The VERCE project, "Virtual Earthquake and seismology Research Community e-science environment in Europe", has initiated a virtual research environment to support complex orchestrated workflows combining state-of-art wave simulation codes and data analysis tools on distributed computing and data infrastructures (DCIs) along with multiple sources of observational data and new capabilities to combine simulation results with observational data. 
The VERCE Science Gateway provides a view of all the available resources, supporting collaboration with shared data and methods, with data access controls. The mapping to DCIs handles identity management, authority controls, transformations between representations and controls, and access to resources. The framework for computational science that provides simulation codes, like SPECFEM3D, democratizes their use by getting data from multiple sources, managing Earth models and meshes, distilling them as input data, and capturing results with meta-data. The dispel4py data-intensive framework allows for developing data-analysis applications using Python and the ObsPy library, which can be executed on different DCIs. A set of tools allows coupling with seismology and external data services. Provenance driven tools validate results and show relationships between data to facilitate method improvement. Lessons learned from VERCE training lead us to conclude that solid-Earth scientists could make significant progress by using VERCE e-science environment. VERCE has already contributed to the European Plate Observation System (EPOS), and is part of the EPOS implementation phase. Its cross-disciplinary capabilities are being extended 9. Electromagnetic study of lithospheric structure in Trans-European Suture Zone in Poland Science.gov (United States) Jóźwiak, Waldemar; Ślęzak, Katarzyna; Nowożyński, Krzysztof; Neska, Anne 2016-04-01 The area covered by magnetotelluric surveys in Poland is mostly related to the Trans-European Suture Zone (TESZ), the largest tectonic boundary in Europe. Numerous 1D, 2D, and pseudo-3D and 3D models of the electrical resistivity distribution were constructed, and a new interpretation method based on Horizontal Magnetic Tensor analysis has been applied recently. 
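The resistivity models in this entry rest on magnetotelluric impedance estimates. As a minimal, hedged sketch of the standard textbook relation (not the authors' processing code; the impedance value below is hypothetical, not from the Polish surveys), apparent resistivity and phase follow from a complex impedance element Z at a given period:

```python
import cmath
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m


def apparent_resistivity_and_phase(Z, period_s):
    """Apparent resistivity (ohm*m) and impedance phase (degrees) from a
    complex MT impedance element Z in SI units (ohms): rho_a = |Z|^2/(omega*mu0)."""
    omega = 2 * math.pi / period_s
    rho_a = abs(Z) ** 2 / (omega * MU0)
    phase_deg = math.degrees(cmath.phase(Z))
    return rho_a, phase_deg


# Hypothetical impedance for illustration only: |Z| chosen so that a
# 100 s period yields roughly 100 ohm*m (a moderately resistive crust).
rho, phi = apparent_resistivity_and_phase(Z=2e-3 + 2e-3j, period_s=100.0)
```

Longer periods probe greater depths, which is how such soundings distinguish a resistive cratonic lithosphere from a conductive asthenosphere.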
The results indicate that the TESZ is a lithospheric discontinuity and that there are noticeable differences in geoelectric structures between the East European Craton (EEC), the transitional zone (TESZ), and the Paleozoic Platform (PP). Electromagnetic sounding is a very efficient tool for resolving lithospheric structure, especially for identifying important horizontal (or lateral) inhomogeneities in the crust. Our study clearly distinguishes the highly resistive East European Craton, the Paleozoic Platform of somewhat lower resistivity, and the transitional TESZ of complicated structure. At the East European Craton we observe a very highly resistive lithosphere reaching 220-240 km depth. Underneath, conductivity values are distinctly greater, most probably resulting from partial melting of rocks; this layer may represent the asthenosphere. The resistivity of the lithosphere under the Paleozoic Platform is somewhat lower, and its thickness does not exceed 150 km. The properties of the lithosphere in the transition zone, under the TESZ, differ significantly. The presented models include prominent, NW-SE striking conductive lineaments. These structures, related to the TESZ, lie at depths of 10-30 km. They are located at mid-crustal level and reach the boundary of the EEC. We tentatively connect these structures to the Variscan Deformation Front (VDF) and the Caledonian Deformation Front (CDF). The differentiation of conductivity visible in the crust continues in the upper mantle. 10. The lithosphere-asthenosphere boundary beneath the Korean Peninsula from S receiver functions Science.gov (United States) Lee, S. H.; Rhie, J. 2017-12-01 The shallow lithosphere in Eastern Asia east of the North-South Gravity Lineament is well documented. The reactivation of the upper asthenosphere induced by the subducting plates is regarded as a dominant source of the lithosphere thinning.
Additionally, the assemblage of various tectonic blocks resulted in complex variation of lithosphere thickness in Eastern Asia. Because the Korean Peninsula is located at the margin of the Eurasian Plate, in close vicinity to the trench of the subducting oceanic plate, significant reactivation of the upper asthenosphere is expected. To study the tectonic history surrounding the Korean Peninsula, we determined the lithosphere-asthenosphere boundary (LAB) beneath it using the common conversion point stacking method with S receiver functions. The depth of the LAB beneath the Korean Peninsula ranges from 60 km to 100 km and is confirmed to be shallower than that expected for Cambrian blocks, as in previous global studies. The LAB shallows to the south, from 95 km in the north to 60 km in the south, and a rapid change of the LAB depth is observed between 36°N and 37°N. This southward shallowing implies that the source of the lithosphere thinning is hot mantle upwelling induced by the northward subduction of the oceanic plates since the Mesozoic. Unfortunately, existing tectonic models can hardly explain the different LAB depths in the north and in the south, or the rapid change of the LAB depth. 11. Earthquake Source Depths in the Zagros Mountains: A "Jelly Sandwich" or "Creme Brulee" Lithosphere? Science.gov (United States) 2006-12-01 The Zagros Mountain Belt of southwestern Iran is one of the most seismically active mountain belts in the world. Previous studies of the depth distribution of earthquakes in this region have shown conflicting results. Early seismic studies of teleseismically recorded events found that earthquakes in the Zagros Mountains nucleated within both the upper crust and upper mantle, indicating that the lithosphere underlying the Zagros Mountains has a strong upper crust and a strong lithospheric mantle, separated by a weak lower crust.
Such a model of lithospheric structure is called the "Jelly Sandwich" model. More recent teleseismic studies, however, found that earthquakes in the Zagros Mountains occur only within the upper crust, thus indicating that the strength of the Zagros Mountains' lithosphere is primarily isolated to the upper crust. This model of lithospheric structure is called the "crème brûlée" model. Analysis of regionally recorded earthquakes nucleating within the Zagros Mountains is presented here. Data primarily come from the Saudi Arabian National Digital Seismic Network, although data sources include many regional open and closed networks. The use of regionally recorded earthquakes facilitates the analysis of a larger dataset than has been used in previous teleseismic studies. Regional waveforms have been inverted for source parameters using a range of potential source depths to determine the best fitting source parameters and depths. Results indicate that earthquakes nucleate in two distinct zones. One seismogenic zone lies at shallow, upper crustal depths. The second seismogenic zone lies near the Moho. Due to uncertainty in the source and Moho depths, further study is needed to determine whether these deeper events are nucleating within the lower crust or the upper mantle. 12. Effect of the lithospheric thermal state on the Moho interface: A case study in South America Science.gov (United States) Bagherbandi, Mohammad; Bai, Yongliang; Sjöberg, Lars E.; Tenzer, Robert; Abrehdary, Majid; Miranda, Silvia; Alcacer Sanchez, Juan M. 2017-07-01 Gravimetric methods applied for Moho recovery in areas with sparse and irregular distribution of seismic data often assume only a constant crustal density. Results of latest studies, however, indicate that corrections for crustal density heterogeneities could improve the gravimetric result, especially in regions with a complex geologic/tectonic structure. 
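For orientation, the constant-density end-member that such gravimetric Moho methods refine is classical Airy isostasy, in which topography is compensated by a crustal root. A minimal sketch follows; the reference crustal thickness and densities are generic textbook values, not parameters of the VMM study:

```python
def airy_moho_depth(topo_km, t0_km=35.0, rho_crust=2800.0, rho_mantle=3300.0):
    """Airy-compensated Moho depth (km) beneath a column of elevation topo_km.

    The crustal root satisfies rho_crust * h = (rho_mantle - rho_crust) * root,
    so root = h * rho_crust / (rho_mantle - rho_crust); t0_km is the reference
    crustal thickness and densities are in kg/m^3.
    """
    root_km = topo_km * rho_crust / (rho_mantle - rho_crust)
    return t0_km + root_km


# A 4 km high Andean-style column deepens the Moho by 4 * 2800/500 = 22.4 km:
moho = airy_moho_depth(4.0)
```

Density corrections for sediments, the crystalline crust, and the lithospheric mantle (as in the entry above) modify exactly this balance, which is why they shift the recovered Moho geometry.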
Moreover, the isostatic mass balance also reflects the density structure within the lithosphere. The gravimetric methods should therefore incorporate an additional correction for the lithospheric mantle as well as deeper mantle density heterogeneities. Following this principle, we solve the Vening Meinesz-Moritz (VMM) inverse problem of isostasy constrained by seismic data to determine the Moho depth of the South American tectonic plate including surrounding oceans, while taking into consideration the crustal and mantle density heterogeneities. Our numerical result confirms that the contribution of sediments significantly modifies the estimation of the Moho geometry, especially along the continental margins with large sediment deposits. To account for the mantle density heterogeneities we develop and apply a method to correct the Moho geometry for the contribution of the lithospheric thermal state (i.e., the lithospheric thermal-pressure correction). In addition, the misfit between the isostatic and seismic Moho models, attributed mainly to deep mantle density heterogeneities and other geophysical phenomena, is corrected for by applying the non-isostatic correction. The results reveal that the application of the lithospheric thermal-pressure correction improves the RMS fit of the VMM gravimetric Moho solution to the CRUST1.0 (by ∼1.9 km) and GEMMA (∼1.1 km) models and the point-wise seismic data (∼0.7 km) in South America. 13. Thermodynamic, geophysical and rheological modeling of the lithosphere underneath the North Atlantic Porcupine Basin (Ireland). Science.gov (United States) Botter, C. D.; Prada, M.; Fullea, J. 2017-12-01 The Porcupine is a North-South oriented basin located southwest of Ireland, along the North Atlantic continental margin, formed by several rifting episodes during Late Carboniferous to Early Cretaceous. The sedimentary cover is underlain by a very thin continental crust in the center of the basin (10 in the South.
In spite of the abundant literature, most of the oil and gas exploration in the Porcupine Basin has been targeting its northern part and is mostly restricted to relatively shallow depths, giving only a limited overview of the basin structure. Therefore, studying the thermodynamics and composition of the deep and broader structures is needed to understand the processes linked to the formation and the symmetry signature of the basin. Here, we model the present-day thermal and compositional structure of the continental crust and lithospheric mantle underneath the Porcupine basin using gravity, seismic, heat flow and elevation data. We use an integrated geophysical-petrological framework where most relevant rock properties (density, seismic velocities) are determined as a function of temperature, pressure and composition. Our modelling approach solves simultaneously the heat transfer, thermodynamic, geopotential, seismic and isostasy equations, and fits the results to all available geophysical and petrological observables (LitMod software). In this work we have implemented a module to compute self-consistently a laterally variable lithospheric elastic thickness based on mineral physics rheological laws (yield strength envelopes over the 3D volume). An appropriate understanding of local and flexural isostatic behavior of the basin is essential to unravel its tectonic history (i.e. stretching factors, subsidence etc.). Our Porcupine basin 3D model is defined by four lithological layers, representing properties from post- and syn-rift sequences to the lithospheric mantle. The computed yield strength envelopes are representative of hyperextended lithosphere and 14. Comprehensive analysis of Curie-point depths and lithospheric effective elastic thickness at Arctic Region Science.gov (United States) Lu, Y.; Li, C. F. 2017-12-01 The Arctic Ocean remains at the forefront of geological exploration.
Here we investigate its deep geological structures and geodynamics on the basis of gravity, magnetic and bathymetric data. We estimate Curie-point depth and lithospheric effective elastic thickness to understand deep geothermal structures and Arctic lithospheric evolution. A fractal exponent of 3.0 for the 3D magnetization model is used in the Curie-point depth inversion. The result shows that Curie-point depths are between 5 and 50 km. Curie depths are mostly small near the active mid-ocean ridges, corresponding well to high heat flow and active shallow volcanism. Large Curie depths are distributed mainly at continental marginal seas around the Arctic Ocean. We present a map of effective elastic thickness (Te) of the lithosphere using a multitaper coherence technique; Te values are between 5 and 110 km. Te primarily depends on geothermal gradient and composition, as well as structures in the lithosphere. We find that Te and Curie-point depths are often correlated. Large Te values are found mainly in continental regions and small Te values in oceanic regions. The Alpha-Mendeleyev Ridge (AMR) and the Svalbard Archipelago (SA) are symmetrical about the mid-ocean ridge. AMR and SA were formed before an early stage of Eurasian basin spreading, and they are considered conjugate large igneous provinces, which show small Te and Curie-point depths. The Novaya Zemlya region has large Curie-point depths and small Te. We consider that faulting and fracturing near the Novaya Zemlya orogenic belt cause the small Te. A series of transform faults connects the Arctic mid-ocean ridge with the North Atlantic mid-ocean ridge. We observe large Te near the transform faults, but small Curie-point depths. We consider that although the temperature near the transform faults is high, the lithosphere there is mechanically strengthened. 15.
Elysium region, mars: Tests of lithospheric loading models for the formation of tectonic features International Nuclear Information System (INIS) Hall, J.L.; Solomon, S.C.; Head, J.W. 1986-01-01 16. Lithospheric Strength and Stress State: Persistent Challenges and New Directions in Geodynamics Science.gov (United States) Hirth, G. 2017-12-01 The strength of the lithosphere controls a broad array of geodynamic processes ranging from earthquakes, the formation and evolution of plate boundaries and the thermal evolution of the planet. A combination of laboratory, geologic and geophysical observations provides several independent constraints on the rheological properties of the lithosphere. However, several persistent challenges remain in the interpretation of these data. Problems related to extrapolation in both scale and time (rate) need to be addressed to apply laboratory data. Nonetheless, good agreement between extrapolation of flow laws and the interpretation of microstructures in viscously deformed lithospheric mantle rocks demonstrates a strong foundation to build on to explore the role of scale. Furthermore, agreement between the depth distribution of earthquakes and predictions based on extrapolation of high temperature friction relationships provides a basis to understand links between brittle deformation and stress state. In contrast, problems remain for rationalizing larger scale geodynamic processes with these same rheological constraints. For example, at face value the lab derived values for the activation energy for creep are too large to explain convective instabilities at the base of the lithosphere, but too low to explain the persistence of dangling slabs in the upper mantle. 
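The sensitivity to activation energy described above can be illustrated with the Arrhenius factor common to creep flow laws, where viscosity scales as exp(E/RT). This is an illustrative sketch with generic, olivine-like parameter values, not the laboratory flow laws themselves:

```python
import math

R = 8.314  # gas constant, J/(mol K)


def viscosity_contrast(e_kj_per_mol, t_cold_k, t_hot_k):
    """Ratio eta(t_cold)/eta(t_hot) for Arrhenius viscosity eta ~ exp(E/(R*T))."""
    e = e_kj_per_mol * 1e3
    return math.exp(e / R * (1.0 / t_cold_k - 1.0 / t_hot_k))


# Contrast across a thermal boundary layer (1200 K vs 1600 K) for two
# commonly quoted activation energies (illustrative values):
low_e = viscosity_contrast(300.0, 1200.0, 1600.0)
high_e = viscosity_contrast(500.0, 1200.0, 1600.0)
# A larger activation energy makes the cold lithosphere disproportionately
# stiffer -- the crux of the instability-versus-slab-persistence tension
# described in the abstract.
```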
In this presentation, I will outline these problems (and successes) and provide thoughts on where new progress can be made to resolve remaining inconsistencies, including discussion of the role of the distribution of volatiles and alteration on the strength of the lithosphere, new data on the influence of pressure on friction and fracture strength, and links between the location of earthquakes, thermal structure, and stress state. 17. A Comparative Analysis of Seismological and Gravimetric Crustal Thicknesses below the Andean Region with Flat Subduction of the Nazca Plate Directory of Open Access Journals (Sweden) Mario E. Gimenez 2009-01-01 Full Text Available A gravimetric study was carried out in a region of the Central Andean Range between 28° and 32° south latitude and between 72° and 66° west longitude. The seismological and gravimetric Moho models were compared in a sector which coincides with the seismological stations of the CHARGE project. The comparison reveals discrepancies between the gravity Moho depths and those obtained from seismological investigations (CHARGE project), the latter giving deeper values than those resulting from the gravimetric inversion. These discrepancies are attenuated when the positive gravimetric effect of the Nazca plate is considered. Nonetheless, a small residuum of about 5 km remains beneath the Cuyania terrane region, to the east of the main Andean chain. This residuum could be gravimetrically justified if the existence of a high-density or eclogitized portion of the lower crust is considered. This result differs from the interpretation of the CHARGE project, which suggested that the entire lower crust extending from the Precordillera to the western "Sierras Pampeanas" could be "eclogitized". In this same sector, we calculated the effective elastic thickness (Te) of the crust. These results indicated an anomalous value of Te = 30 km below the Cuyania terrane. 
This is further evidence that the Cuyania terrane is allochthonous, for which geological evidence also exists. 18. SEISMOLOGY OF A LARGE SOLAR CORONAL LOOP FROM EUVI/STEREO OBSERVATIONS OF ITS TRANSVERSE OSCILLATION International Nuclear Information System (INIS) Verwichte, E.; Van Doorsselaere, T.; Foullon, C.; Nakariakov, V. M.; Aschwanden, M. J. 2009-01-01 The first analysis of a transverse loop oscillation observed by both Solar TErrestrial RElations Observatories (STEREO) spacecraft is presented, for an event on 2007 June 27 as seen by the Extreme Ultraviolet Imager (EUVI). The three-dimensional loop geometry is determined using a three-dimensional reconstruction with a semicircular loop model, which allows for an accurate measurement of the loop length. The plane of wave polarization is found from comparison with a simulated loop model and shows that the oscillation is a fundamental horizontally polarized fast magnetoacoustic kink mode. The oscillation is characterized using an automated method and the results from both spacecraft are found to match closely. The oscillation period is 630 ± 30 s and the damping time is 1000 ± 300 s. Also, clear intensity variations associated with the transverse loop oscillations are reported for the first time. They are shown to be caused by the effect of line-of-sight integration. The Alfvén speed and coronal magnetic field derived using coronal seismology are discussed. This study shows that EUVI/STEREO observations achieve an adequate accuracy for studying long-period, large-amplitude transverse loop oscillations. 19. USING HINODE/EXTREME-ULTRAVIOLET IMAGING SPECTROMETER TO CONFIRM A SEISMOLOGICALLY INFERRED CORONAL TEMPERATURE International Nuclear Information System (INIS) Marsh, M. S.; Walsh, R. W. 2009-01-01 The Extreme-Ultraviolet Imaging Spectrometer on board the HINODE satellite is used to examine the loop system described in Marsh et al. by applying spectroscopic diagnostic methods. 
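The kink-mode coronal seismology in the transverse loop oscillation study above (item 18) rests on simple relations between loop length, oscillation period, Alfvén speed and magnetic field. A minimal sketch follows, using the 630 s period quoted in the abstract but otherwise hypothetical, illustrative parameters (loop length, density ratio and internal density are assumptions, not values from the paper):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m


def kink_speed(loop_length_m, period_s):
    """Phase speed of the fundamental standing kink mode: Ck = 2L / P."""
    return 2.0 * loop_length_m / period_s


def alfven_speed(ck, density_ratio_ext_int):
    """Internal Alfven speed from the kink speed for a low-beta loop:
    CA = Ck * sqrt((1 + rho_e/rho_i) / 2)."""
    return ck * math.sqrt((1.0 + density_ratio_ext_int) / 2.0)


def magnetic_field(ca, internal_density_kg_m3):
    """Field strength from the Alfven speed: B = CA * sqrt(mu0 * rho_i)."""
    return ca * math.sqrt(MU0 * internal_density_kg_m3)


# Illustrative numbers (only the 630 s period is from the abstract):
L = 340e6                      # loop length in metres (hypothetical)
P = 630.0                      # oscillation period in seconds
ck = kink_speed(L, P)          # kink phase speed, m/s
ca = alfven_speed(ck, 0.1)     # assume rho_e / rho_i = 0.1
b = magnetic_field(ca, 1e-12)  # assume rho_i = 1e-12 kg/m^3
```

With these assumed inputs the field comes out near 10 G, the typical order of magnitude such studies report for active-region loops.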
A simple isothermal mapping algorithm is applied to determine where the assumption of isothermal plasma may be valid, and the emission measure loci technique is used to determine the temperature profile along the base of the loop system. It is found that, along the base, the loop has a uniform temperature profile with a mean temperature of 0.89 ± 0.09 MK, which is in agreement with the temperature determined seismologically in Marsh et al., using observations interpreted as the slow magnetoacoustic mode. The results further strengthen the slow mode interpretation, propagation at a uniform sound speed, and the analysis method applied in Marsh et al. It is found that it is not possible to discriminate between the slow mode phase speed and the sound speed within the precision of the present observations. 20. Tidal and seasonal variations in calving flux observed with passive seismology Science.gov (United States) Bartholomaus, T.C.; Larsen, Christopher F.; West, Michael E.; O'Neel, Shad; Pettit, Erin C.; Truffer, Martin 2015-01-01 The seismic signatures of calving events, i.e., calving icequakes, offer an opportunity to examine calving variability with greater precision than is available with other methods. Here, using observations from Yahtse Glacier, Alaska, we describe methods to detect, locate, and characterize calving icequakes. We combine these icequake records with a coincident, manually generated record of observed calving events to develop and validate a statistical model through which we can infer iceberg sizes from the properties of calving icequakes. We find that the icequake duration is the single most significant predictor of an iceberg's size. We then apply this model to 18 months of seismic recordings and find elevated iceberg calving flux during the summer and fall and a pronounced lull in calving during midwinter. Calving flux is sensitive to semidiurnal tidal stage. 
Large calving events are tens of percent more likely during falling and low tides than during rising and high tides, consistent with a view that deeper water has a stabilizing influence on glacier termini. Multiple factors affect the occurrence of mechanical fractures that ultimately lead to iceberg calving. At Yahtse Glacier, seismology allows us to demonstrate that variations in the rate of submarine melt are a dominant control on iceberg calving rates at seasonal timescales. On hourly to daily timescales, tidal modulation of the normal stress against the glacier terminus reveals the nonlinear glacier response to changes in the near-terminus stress field. 1. Conception and test of Echoes, a spectro-imager dedicated to the seismology of Jupiter Science.gov (United States) Soulat, L.; Schmider, F.-X.; Robbe-Dubois, S.; Appourchaux, T.; Gaulme, P.; Bresson, Y.; Gay, J.; Daban, J.-B.; Gouvret, C. 2017-11-01 Echoes is a spaceborne Doppler Spectro-Imager (DSI) project which has been proposed as a payload for the JUICE mission selected in the Cosmic Vision program of the European Space Agency (ESA). It is a Fourier transform spectrometer which measures phase shifts in the interference patterns induced by Doppler shifts of spectral lines reflected at the surface of the planet. Dedicated to the seismology of Jupiter, the instrument is designed to analyze the periodic movements induced by internal acoustic modes of the planet. It will provide knowledge of the internal structure of Jupiter, in particular of the central region, which is essential for understanding the formation scenario of the giant planets. The optical design is based on a modified Mach-Zehnder interferometer operating in the visible domain and carefully takes into account the sensitivity of the optical path difference to the temperature. 
The instrument simultaneously produces four images in quadrature, which allows the phase to be measured without contamination by the continuum component of the incident light. We expect a noise level below 1 cm² s⁻² µHz⁻¹ in the frequency range [0.5–10] mHz. In this paper, we present the prototype implemented at the Observatoire de la Côte d'Azur (OCA) in collaboration with Institut d'Astrophysique Spatiale (IAS) to study the real performance in the laboratory and to demonstrate the capability to reach the required Technology Readiness Level 5. 2. Interaction Between Downwelling Flow and the Laterally-Varying Thickness of the North American Lithosphere Inferred from Seismic Anisotropy Science.gov (United States) Behn, M. D.; Conrad, C. P.; Silver, P. G. 2005-12-01 Shear flow in the asthenosphere tends to align olivine crystals in the direction of shear, producing a seismically anisotropic asthenosphere that can be detected using a number of seismic techniques (e.g., shear-wave splitting (SWS) and surface waves). In the ocean basins, where the asthenosphere has a relatively uniform thickness and lithospheric anisotropy appears to be small, observed azimuthal anisotropy is well fit by asthenospheric shear flow in global flow models driven by a combination of plate motions and mantle density heterogeneity. In contrast, beneath the continents both the lithospheric ceiling and asthenospheric thickness may vary considerably across cratonic regions and ocean-continent boundaries. To examine the influence of a continental lithosphere with variable thickness on predictions of continental seismic anisotropy, we impose lateral variations in lithospheric viscosity in global models of mantle flow driven by plate motions and mantle density heterogeneity. For the North American continent, the Farallon slab descends beneath a deep cratonic root, producing downwelling flow in the upper mantle and convergent flow beneath the cratonic lithosphere. 
We evaluate both the orientation of the predicted azimuthal anisotropy and the depth dependence of radial anisotropy for this downwelling flow and find that the inclusion of a strong continental root provides an improved fit to SWS observations beneath the North American craton. Thus, we hypothesize that at least some continental anisotropy is associated with sub-lithospheric viscous shear, although fossil anisotropy in the lithospheric layer may also contribute significantly. Although we do not observe significant variations in the direction of predicted anisotropy with depth, we do find that the inclusion of deep continental roots pushes the depth of the anisotropy layer deeper into the upper mantle. We test several different models of laterally-varying lithosphere and asthenosphere 3. Preservation of an Archaean whole rock Re-Os isochron for the Venetia lithospheric mantle: Evidence for rapid crustal recycling and lithosphere stabilisation at 3.3 Ga Science.gov (United States) van der Meer, Quinten H. A.; Klaver, Martijn; Reisberg, Laurie; Riches, Amy J. V.; Davies, Gareth R. 2017-11-01 Re-Os and platinum group element analyses are reported for peridotite xenoliths from the 533 Ma Venetia kimberlite cluster situated in the Limpopo Mobile Belt, the Neoarchaean collision zone between the Kaapvaal and Zimbabwe Cratons. The Venetia xenoliths provide a rare opportunity to examine the state of the cratonic lithosphere prior to major regional metasomatic disturbance of Re-Os systematics throughout the Phanerozoic. The 32 studied xenoliths record Si-enrichment that is characteristic of the Kaapvaal lithospheric mantle and can be subdivided into five groups based on Re-Os analyses. The most pristine group I samples (n = 13) display an approximately isochronous relationship and fall on a 3.28 ± 0.17 Ga (95% conf. int.) reference line that is based on their mean TMA age. This age overlaps with the formation age of the Limpopo crust at 3.35-3.28 Ga. 
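The TMA (mantle model) ages quoted in the Re-Os abstract above come from a standard construction: the time at which the sample's Os isotope evolution line intersects the primitive-mantle line. A sketch, assuming commonly used literature reference values for the primitive mantle and the 187Re decay constant (the study's exact parameters may differ):

```python
import math

LAMBDA_RE187 = 1.666e-11  # 187Re decay constant, 1/yr (Smoliar et al. 1996)
# Assumed primitive-mantle reference values (common literature estimates):
PUM_OS = 0.1296    # present-day 187Os/188Os
PUM_REOS = 0.4353  # present-day 187Re/188Os


def tma_age_yr(sample_os, sample_reos=0.0):
    """Re-Os mantle model age in years: solve for the intersection of the
    sample and primitive-mantle Os isotope evolution lines."""
    ratio = (PUM_OS - sample_os) / (PUM_REOS - sample_reos)
    return math.log(1.0 + ratio) / LAMBDA_RE187


# A strongly melt-depleted, effectively Re-free peridotite (illustrative
# composition, not a measured Venetia sample):
age = tma_age_yr(sample_os=0.1060)  # roughly 3.2 Gyr
```

The illustrative unradiogenic composition yields an age of about 3.2 Gyr, of the same order as the 3.28 Ga mean TMA age quoted above.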
The group I samples derive from ∼50 to ∼170 km depth, suggesting coeval melt depletion of the majority of the Venetia lithospheric mantle column. Group II and III samples have elevated Re/Os due to Re addition during kimberlite magmatism. Group II has otherwise undergone a similar evolution to the group I samples, with overlapping 187Os/188Os at eruption age (187Os/188OsEA), while group III samples have low Os concentrations, unradiogenic 187Os/188OsEA and were effectively Re-free prior to kimberlite magmatism. The other sample groups (IV and V) have disturbed Re-Os systematics and provide no reliable age information. A strong positive correlation is recorded between Os and Re concentrations for group I samples, which is extended to groups II and III after correction for kimberlite addition. This positive correlation precludes a single-stage melt depletion history and indicates coupled remobilisation of Re and Os. The combination of Re-Os mobility, preservation of the isochronous relationship, correlation of 187Os/188Os with degree of melt depletion and lack of radiogenic Os addition puts tight constraints on 4. Rheological structure of the lithosphere in plate boundary strike-slip fault zones Science.gov (United States) Chatzaras, Vasileios; Tikoff, Basil; Kruckenberg, Seth C.; Newman, Julie; Titus, Sarah J.; Withers, Anthony C.; Drury, Martyn R. 2016-04-01 How well constrained is the rheological structure of the lithosphere in plate boundary strike-slip fault systems? Further, how do lithospheric layers, with rheologically distinct behaviors, interact within the strike-slip fault zones? To address these questions, we present rheological observations from the mantle sections of two lithospheric-scale, strike-slip fault zones. Xenoliths from ~40 km depth (970-1100 °C) beneath the San Andreas fault system (SAF) provide critical constraints on the mechanical stratification of the lithosphere in this continental transform fault. 
Samples from the Bogota Peninsula shear zone (BPSZ, New Caledonia), which is an exhumed oceanic transform fault, provide insights into lateral variations in mantle strength and viscosity across the fault zone at a depth corresponding to deformation temperatures of ~900 °C. Olivine recrystallized grain size piezometry suggests that the shear stress in the SAF upper mantle is 5-9 MPa and in the BPSZ is 4-10 MPa. Thus, the mantle strength in both fault zones is comparable to the crustal strength (~10 MPa) of seismogenic strike-slip faults in the SAF system. Across the BPSZ, shear stress increases from 4 MPa in the surrounding rocks to 10 MPa in the mylonites, which comprise the core of the shear zone. Further, the BPSZ is characterized by at least one order of magnitude difference in the viscosity between the mylonites (10¹⁸ Pa·s) and the surrounding rocks (10¹⁹ Pa·s). Mantle viscosity in both the BPSZ mylonites and the SAF (7.0×10¹⁸-3.1×10²⁰ Pa·s) is relatively low. To explain our observations from these two strike-slip fault zones, we propose the "lithospheric feedback" model in which the upper crust and lithospheric mantle act together as an integrated system. Mantle flow controls displacement and the upper crust controls the stress magnitude in the system. Our stress data combined with data that are now available for the middle and lower crustal sections of other transcurrent fault 5. Lithospheric expression of geological units in central and eastern North America from full waveform tomography Science.gov (United States) Yuan, Huaiyu; French, Scott; Cupillard, Paul; Romanowicz, Barbara 2014-09-01 The EarthScope TA deployment has provided dense array coverage throughout the continental US and with it, the opportunity for high resolution 3D seismic velocity imaging of both lithosphere and asthenosphere in the continent. 
Building upon our previous long-period waveform tomographic modeling in North America, we present a higher resolution 3D isotropic and radially anisotropic shear wave velocity model of the North American lithospheric mantle, constructed tomographically using the spectral element method for wavefield computations and waveform data down to 40 s period. The new model exhibits pronounced spatial correlation between lateral variations in seismic velocity and anisotropy and major tectonic units as defined from surface geology. In the center of the continent, the North American craton exhibits uniformly thick lithosphere down to 200-250 km, while major tectonic sutures of Proterozoic age visible in the surface geology extend down to 100-150 km as relatively narrow zones of distinct radial anisotropy, with Vsv > Vsh. Notably, the upper mantle low velocity zone is present everywhere under the craton between 200 and 300 km depth. East of the continental rift margin, the lithosphere is broken up into a series of large, somewhat thinner (150 km) high velocity blocks, which extend laterally 200-300 km offshore into the Atlantic Ocean. Between the craton and these deep-rooted blocks, we find a prominent narrow band of low velocities that roughly follows the southern and eastern Laurentia rift margin and extends into New England. We suggest that the lithosphere along this band of low velocities may be thinned due to the combined effects of repeated rifting processes and northward extension of the hotspot-related Bermuda low-velocity channel across the New England region. We propose that the deep-rooted high velocity blocks east of the Laurentia margin represent the Proterozoic Gondwanian terranes of pan-African affinity, which were captured during the Rodinia 6. Gravity signals from the lithosphere in the Central European Basin System Science.gov (United States) Yegorova, T.; Bayer, U.; Thybo, H.; Maystrenko, Y.; Scheck-Wenderoth, M.; Lyngsie, S. B. 
2007-01-01 We study the gravity signals from different depth levels in the lithosphere of the Central European Basin System (CEBS). The major elements of the CEBS are the Northern and Southern Permian Basins, which include the Norwegian-Danish Basin (NDB), the North-German Basin (NGB) and the Polish Trough (PT). An up to 10 km thick cover of Mesozoic-Cenozoic sediments hides the gravity signal from below the basin and masks the heterogeneous structure of the consolidated crust, which is assumed to be composed of domains that were accreted during the Paleozoic amalgamation of Europe. We performed three-dimensional (3D) gravity backstripping to investigate the structure of the lithosphere below the CEBS. Residual anomalies are derived by removing the effect of sediments down to the base of the Permian from the observed field. In order to correct for the influence of large salt structures, lateral density variations are incorporated. These sediment-free anomalies are interpreted to reflect Moho relief and density heterogeneities in the crystalline crust and uppermost mantle. The gravity effect of the Moho relief compensates to a large extent the effect of the sediments in the CEBS and in the North Sea. Removal of the effects of large-scale crustal inhomogeneities shows a clear expression of the Variscan arc system at the southern part of the study area and the old crust of Baltica further north-east. The remaining residual anomalies (after stripping off the effects of sediments, Moho topography and large-scale crustal heterogeneities) reveal long wavelength anomalies, which are caused mainly by density variations in the upper mantle, though gravity influence from the lower crust cannot be ruled out. They indicate that the three main subbasins of the CEBS originated on different lithospheric domains. The PT originated on a thick, strong and dense lithosphere of the Baltica type. 
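A first-order version of the sediment stripping described in the CEBS study above uses the infinite Bouguer slab approximation for the gravity effect of a layer. The thickness and density contrast below are illustrative assumptions, not values from the CEBS model:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2


def slab_effect_mgal(thickness_m, density_contrast_kg_m3):
    """Gravity effect of an infinite horizontal slab (Bouguer slab):
    dg = 2 * pi * G * delta_rho * h, returned in mGal (1 mGal = 1e-5 m/s^2)."""
    return 2.0 * math.pi * G * density_contrast_kg_m3 * thickness_m / 1e-5


# A 10 km sedimentary column assumed ~400 kg/m^3 lighter than crystalline
# crust produces a strong negative anomaly that backstripping removes:
dg = slab_effect_mgal(10e3, -400.0)
```

For these assumed numbers the slab effect is roughly -170 mGal, which illustrates why a deep sedimentary basin can completely mask the signal of deeper crustal structure until it is stripped off.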
The NDB was formed on a weakened Baltica low-density lithosphere formed during the Sveco 7. Earth's evolving subcontinental lithospheric mantle: inferences from LIP continental flood basalt geochemistry Science.gov (United States) Greenough, John D.; McDivitt, Jordan A. 2018-04-01 Archean and Proterozoic subcontinental lithospheric mantle (SLM) is compared using 83 similarly incompatible element ratios (SIER; minimally affected by % melting or differentiation, e.g., Rb/Ba, Nb/Pb, Ti/Y) for >3700 basalts from ten continental flood basalt (CFB) provinces representing nine large igneous provinces (LIPs). Nine transition metals (TM; Fe, Mn, Sc, V, Cr, Co, Ni, Cu, Zn) in 102 primitive basalts (Mg# = 0.69-0.72) from nine provinces yield additional SLM information. An iterative evaluation of SIER values indicates that, regardless of age, CFB transecting Archean lithosphere are enriched in Rb, K, Pb, Th and heavy REE(?); whereas P, Ti, Nb, Ta and light REE(?) are higher in Proterozoic-and-younger SLM sources. This suggests efficient transfer of alkali metals and Pb to the continental lithosphere perhaps in association with melting of subducted ocean floor to form Archean tonalite-trondhjemite-granodiorite terranes. Titanium, Nb and Ta were not efficiently transferred, perhaps due to the stabilization of oxide phases (e.g., rutile or ilmenite) in down-going Archean slabs. CFB transecting Archean lithosphere have EM1-like SIER that are more extreme than seen in oceanic island basalts (OIB) suggesting an Archean SLM origin for OIB-enriched mantle 1 (EM1). In contrast, OIB high U/Pb (HIMU) sources have more extreme SIER than seen in CFB provinces. HIMU may represent subduction-processed ocean floor recycled directly to the convecting mantle, but to avoid convective homogenization and produce its unique Pb isotopic signature may require long-term isolation and incubation in SLM. 
Based on all TM, CFB transecting Proterozoic lithosphere are distinct from those cutting Archean lithosphere. There is a tendency for lower Sc, Cr, Ni and Cu, and higher Zn, in the sources for Archean-cutting CFB and EM1 OIB, than Proterozoic-cutting CFB and HIMU OIB. All CFB have SiO2 (pressure proxy)-Nb/Y (% melting proxy) relationships supporting low pressure, high % melting 8. Spatial variations of effective elastic thickness of the Lithosphere in the Southeast Asia regions Science.gov (United States) Shi, Xiaobin; Kirby, Jon; Yu, Chuanhai; Swain, Chris; Zhao, Junfeng 2016-04-01 The effective elastic thickness Te corresponds to the thickness of an idealized elastic beam that would bend similarly to the actual lithosphere under the same applied loads, and could provide important insight into rheology and state of stress. Te estimates thus help to improve our understanding of the relationship between tectonic styles, the distribution of earthquakes and lithospheric rheology in various tectonic settings. Southeast Asia, located in the southeastern part of the Eurasian Plate, comprises a complex collage of continental fragments, volcanic arcs, suture zones and marginal oceanic basins, and is surrounded by tectonically active margins which exhibit intense seismicity and volcanism. The Cenozoic southeastward extrusion of the rigid Indochina Block due to the Indo-Asian collision resulted in drastic surface deformation in the western area. Therefore, a high-resolution map of spatial variations of Te is a useful tool for examining the relationships between surface deformation, earthquakes, lithospheric structure and mantle dynamics in this complex area. In this study, we present a high-resolution map of spatial variations of Te in the Southeast Asia area using the wavelet method, which convolves a range of scaled wavelets with the two data sets of Bouguer gravity anomaly and topography. 
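Te values like those mapped in the study above translate into plate flexural rigidity through the standard thin elastic plate relation D = E·Te³ / (12·(1 − ν²)); the Young's modulus and Poisson's ratio below are conventional assumed values, not parameters from the paper:

```python
def flexural_rigidity(te_m, youngs_pa=1.0e11, poisson=0.25):
    """Flexural rigidity of a thin elastic plate, in N m:
    D = E * Te^3 / (12 * (1 - nu^2))."""
    return youngs_pa * te_m ** 3 / (12.0 * (1.0 - poisson ** 2))


# The 20-50 km Te range quoted for the Khorat plateau region:
d_low = flexural_rigidity(20e3)   # Te = 20 km
d_high = flexural_rigidity(50e3)  # Te = 50 km
ratio = d_high / d_low            # rigidity scales as Te cubed
```

Because rigidity goes as the cube of Te, the upper end of that range is (50/20)³ ≈ 15.6 times stiffer than the lower end, which is why modest Te variations map into large differences in flexural behaviour.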
The topography and bathymetry grid data were extracted from the GEBCO_08 Grid of the GEBCO digital atlas. The pattern of Te variations agrees well with the tectonic provinces in the study area. On the whole, low lithosphere strength characterizes the oceanic basins, such as the South China Sea, the Banda Sea area, the Celebes Sea, the Sulu Sea and the Andaman Sea. Unlike the oceanic basins, the continental fragments show a complex pattern of Te variations. The Khorat plateau and its adjacent area show strong lithosphere characteristics with a Te range of 20-50 km, suggesting that the Khorat plateau is the strong core of the Indochina Block. The West 9. Dispel4py: An Open-Source Python library for Data-Intensive Seismology Science.gov (United States) Filgueira, Rosa; Krause, Amrey; Spinuso, Alessandro; Klampanos, Iraklis; Danecek, Peter; Atkinson, Malcolm 2015-04-01 Scientific workflows are a necessary tool for many scientific communities as they enable easy composition and execution of applications on computing resources while scientists can focus on their research without being distracted by computation management. Nowadays, scientific communities (e.g. Seismology) have access to a large variety of computing resources and their computational problems are best addressed using parallel computing technology. However, successful use of these technologies requires a lot of additional machinery whose use is not straightforward for non-experts: different parallel frameworks (MPI, Storm, multiprocessing, etc.) must be used depending on the computing resources (local machines, grids, clouds, clusters) where applications are run. This implies that, to achieve the best application performance, users usually have to change their code depending on the features of the platform selected to run it. This work presents dispel4py, a new open-source Python library for describing abstract stream-based workflows for distributed data-intensive applications. 
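The stream-based composition idea behind dispel4py can be illustrated with plain Python generators. This is a conceptual sketch only, not the dispel4py API: the processing-element names and the pipeline are invented for illustration.

```python
def source(values):
    """Producer 'PE': emits a stream of samples."""
    for v in values:
        yield v


def detrend(stream):
    """Transformation 'PE': removes the mean from a window of samples."""
    window = list(stream)
    mean = sum(window) / len(window)
    for v in window:
        yield v - mean


def peak(stream):
    """Consumer 'PE': reduces the stream to its maximum absolute amplitude."""
    return max(abs(v) for v in stream)


# Compose the pipeline: source -> detrend -> peak.  In a workflow system the
# same abstract graph could be mapped to sequential, multiprocessing or MPI
# execution without changing the processing elements themselves.
amplitude = peak(detrend(source([1.0, 2.0, 3.0, 10.0])))
```

The point of the abstraction, as the abstract explains, is that the same declarative graph can be mapped at run-time to different back-ends (sequential for testing, parallel frameworks for production) without touching the elements' code.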
Special care has been taken to provide dispel4py with the ability to map abstract workflows to different platforms dynamically at run-time. Currently, dispel4py has four mappings: Apache Storm, MPI, multi-threading and sequential. The main goal of dispel4py is to provide an easy-to-use tool to develop and test workflows on local resources by using the sequential mode with a small dataset. Later, once a workflow is ready for long runs, it can be automatically executed on different parallel resources. dispel4py takes care of the underlying mappings by performing an efficient parallelisation. Processing Elements (PEs) represent the basic computational activities of any dispel4py workflow, which can be a seismological algorithm or a data transformation process. To create a dispel4py workflow, users only have to write a few lines of code to describe their PEs and how they are 10. QuakeML: Status of the XML-based Seismological Data Exchange Format Science.gov (United States) Euchner, Fabian; Schorlemmer, Danijel; Kästli, Philipp; Quakeml Working Group 2010-05-01 QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. The current release (version 1.2) is based on a public Request for Comments process that included contributions from ETH, GFZ, USC, SCEC, USGS, IRIS DMC, EMSC, ORFEUS, GNS, ZAMG, BRGM, Nanometrics, and ISTI. QuakeML has mainly been funded through the EC FP6 infrastructure project NERIES, in which it was endorsed as the preferred data exchange format. Currently, QuakeML services are being installed at several institutions around the globe, including EMSC, ORFEUS, ETH, Geoazur (Europe), NEIC, ANSS, SCEC/SCSN (USA), and GNS Science (New Zealand). Some of these institutions already provide QuakeML earthquake catalog web services. Several implementations of the QuakeML data model have been made. 
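To illustrate the kind of event description that QuakeML standardizes, here is a heavily simplified, hypothetical XML fragment loosely modelled on QuakeML's event/origin/magnitude structure (it is not schema-valid QuakeML), parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified event record (illustrative only, not valid QuakeML):
DOC = """
<eventParameters>
  <event publicID="smi:example/event/1">
    <origin>
      <time>2010-05-01T12:00:00Z</time>
      <latitude>46.5</latitude>
      <longitude>8.0</longitude>
      <depth>10000</depth>
    </origin>
    <magnitude>
      <mag>4.2</mag>
      <type>ML</type>
    </magnitude>
  </event>
</eventParameters>
"""


def read_events(xml_text):
    """Extract a flat summary of each event from the simplified document."""
    root = ET.fromstring(xml_text)
    events = []
    for ev in root.findall("event"):
        events.append({
            "id": ev.get("publicID"),
            "time": ev.findtext("origin/time"),
            "mag": float(ev.findtext("magnitude/mag")),
            "magtype": ev.findtext("magnitude/type"),
        })
    return events


events = read_events(DOC)
```

Real QuakeML adds namespaces, uncertainties and many more elements (picks, arrivals, focal mechanisms, moment tensors, as listed above), but the nesting of origins and magnitudes under an event is the core of the data model.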
QuakePy, an open-source Python-based seismicity analysis toolkit using the QuakeML data model, is being developed at ETH. QuakePy is part of the software stack used in the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center installations, developed by SCEC. Furthermore, the QuakeML data model is part of the SeisComP3 package from GFZ Potsdam. QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, seismic inventory, and resource metadata) has been started, but is at an early stage. Contributions from the community that help to widen the thematic coverage of QuakeML are highly welcome. Online resources: http://www.quakeml.org, http://www.quakepy.org 11. Enhancing Outreach using Social Networks at the National Seismological Network of Costa Rica Science.gov (United States) 2014-12-01 Costa Rica has a very high seismicity rate and geological processes are part of everyday life. Traditionally, information about these processes has been provided by conventional mass media (television and radio). However, due to the new trends in information flow, a new approach towards science education is necessary for transmitting knowledge from scientific research to the general public in Costa Rica. Since 1973, the National Seismological Network of Costa Rica (RSN: UCR-ICE) has studied the seismicity and volcanic activity in the country. In this study, we describe the different channels that the RSN currently uses to report earthquake information: email, social networks, and a website, as well as the development of a smartphone application. 
Since the RSN started actively participating in social networks, an increase in awareness in the general public has been noticed, particularly regarding felt earthquakes. Based on this trend, we have focused on enhancing public outreach through social media. We analyze the demographics and geographic distribution of the RSN Facebook page, the growth of followers, and the significance of their feedback for reporting intensity data. We observe that certain regions of the country have more Facebook activity, although those regions are not the most populated nor have a high Internet connectivity index. We interpret this pattern as the result of a higher awareness of geological hazards in those specific areas. We noticed that the growth of RSN users on Facebook has a strong correlation with seismic events, as opposed to Twitter, which displays a steady growth with no clear correlation with specific seismic events. We see the social networks as opportunities to engage non-science audiences and encourage the population to participate in reporting seismic observations, thus providing intensity data. With the increasing access to the Internet from mobile phones in Costa Rica, we see this approach to science education as an opportunity 12. Seismological studies carried out by the CEA in connection with the safety of nuclear sites International Nuclear Information System (INIS) Barbreau, A.; Ferrieux, H.; Mohammadioun, B. 1975-01-01 In order to evaluate the seismic risk at nuclear sites, the Department of Nuclear Safety of the French Atomic Energy Commission (CEA) has been conducting a programme of seismological studies for several years. This programme is aimed at acquiring a better knowledge of seismic phenomena, in particular the spectral distribution of the energy of earthquakes, considered to be the only correct approach to the problem of earthquake protection, as well as a better knowledge of the seismic activity of the areas surrounding nuclear sites. 
The authors propose defining the design spectrum of the site on the basis of the probable energy at the source, the distance from the epicentre and the transfer function of the geological formations. The need to acquire data on the characteristics of French earthquakes and on regional seismicity for the purpose of defining this spectrum led the Department of Nuclear Safety to set up a network of seismic stations. It now has an observatory at the Cadarache Nuclear Research Centre and mobile stations with automatic magnetic recording for studying aftershock sequences and the activity of faults in the vicinity of nuclear sites, and for making the measurements necessary to calculate the transfer functions. With this equipment it was possible to record six aftershocks of the Oleron earthquake of 7 September 1972 close to the epicentre, and to calculate their spectra. These spectra contained substantial high-frequency content, in agreement with data obtained from other sources for low-energy earthquakes. The synthetic spectra calculated on the basis of one magnitude and one distance are in good agreement with the spectra obtained experimentally 13. Seismological investigation of September 09 2016, North Korea underground nuclear test Directory of Open Access Journals (Sweden) H. Gaber 2017-12-01 Full Text Available On Sep. 9, 2016, a seismic event of mb 5.3 took place in North Korea. This event was reported as a nuclear test. In this study, we applied a number of discriminant techniques that facilitate the ability to distinguish between explosions and earthquakes on the Korean Peninsula. The differences between explosions and earthquakes are due to variation in source dimension, epicenter depth and source mechanism, or a combination of these. There are many seismological differences between nuclear explosions and earthquakes, but not all of them are detectable at large distances or are appropriate to each earthquake and explosion. 
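Among the classic discriminants used in studies like the one above is the mb:Ms screen, which exploits the fact that explosions generate surface waves much less efficiently than earthquakes of the same body-wave magnitude. A sketch with an illustrative screening line (the slope and intercept here are assumptions, not the values derived in the study):

```python
def mb_ms_screen(mb, ms, slope=1.0, intercept=-1.2):
    """Flag an event as explosion-like when its surface-wave magnitude Ms
    falls below an empirical screening line Ms = slope * mb + intercept.
    The line coefficients are illustrative placeholders."""
    return ms < slope * mb + intercept


# An mb 5.3 event with a deficient Ms plots on the explosion side of the
# line, while a typical earthquake with comparable mb does not
# (magnitude pairs are hypothetical):
is_explosion = mb_ms_screen(mb=5.3, ms=3.5)
is_quake = mb_ms_screen(mb=4.9, ms=4.5)
```

In practice the screening line is calibrated regionally from populations of known earthquakes and explosions, which is exactly what the comparison against previously published mb:Ms charts in the abstract amounts to.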
The discrimination methods used in the current study include the seismic source location, source depth, the differences in the frequency contents, complexity versus spectral ratio and Ms-mb differences for both earthquakes and explosions. The Sep. 9, 2016, event is located in the region of the North Korea nuclear test site at zero depth, indicating that it is likely a nuclear explosion. Comparison between the P wave spectra of the nuclear test and the Sep. 8, 2000, North Korea earthquake (mb 4.9) shows that the spectra of the two events are nearly the same. The results of applying the theoretical model of Brune to the P wave spectra of both explosion and earthquake show that the explosion manifests a larger corner frequency than the earthquake, reflecting the nature of the different sources. The complexity and spectral ratio were also calculated from the waveform data recorded at a number of stations in order to investigate the relation between them. The observed classification percentage of this method is about 81%. Finally, the mb:Ms method is also investigated. We calculate mb and Ms for the Sep. 9, 2016, explosion and compare the result with the mb:Ms chart obtained from previous studies. This method works well for the explosion. Keywords: Discrimination, Seismic source location, Brune model, Spectral parameters, Complexity method, Mb:Ms. 14. High-rate multi-GNSS: what does it mean to seismology? Science.gov (United States) Geng, J. 2017-12-01 GNSS precise point positioning (PPP) is capable of measuring centimeter-level positions epoch by epoch at a single station, and is thus valuable in tsunami/earthquake early warning, where static displacements in the near field are critical to rapidly and reliably determining the magnitude of destructive events. However, most operational real-time PPP systems at present rely only on GPS data.
The deficiency of such systems is that the high reliability and availability of precise displacements cannot be maintained continuously in real time, which is, however, a crucial requirement for disaster resistance and response. Multi-GNSS, which adds GLONASS, BeiDou, Galileo and QZSS to GPS, can be a solution to this problem because many more satellites per epoch (e.g. 30-40) will be available. In this case, positioning failure due to data loss or blunders can be minimized and, on the other hand, positioning initializations can be accelerated considerably since the satellite geometry for each epoch will be greatly enhanced. We established a prototype real-time multi-GNSS PPP service based on an Asia-Pacific real-time network that can collect and stream high-rate data from all five navigation systems above. We estimated high-rate satellite clock corrections and enabled undifferenced ambiguity fixing for multi-GNSS, which therefore ensures high availability and reliability of precise displacement estimates in contrast to GPS-only systems. We will report how we can benefit from multi-GNSS for seismology, especially the noise characteristics of high-rate and sub-daily displacements. We will also use storm surge loading events to demonstrate the contribution of multi-GNSS to sub-daily transient signals. 15. Young solar-type stars evolution: the lithium and seismology contributions International Nuclear Information System (INIS) Piau, Laurent Eric 2001-01-01 This PhD thesis is devoted to young low-mass stars. We modeled many of them from their formation until the solar age, covering the range between 0.65 and 1.4 solar masses and metallicity values ranging from -0.1 to 0.1 dex. The theoretical computations are related to observations in nearby open clusters: Hyades, Pleiades... This comparison demonstrates that the lithium evolution is still poorly understood in such stars.
In stellar interiors, this nuclide is destroyed by nuclear processes at low temperatures. Its surface abundance evolution reflects mixing between the surface and deeper layers and therefore allows direct insight into stellar structure and evolution, both of which depend on microscopic and macroscopic physical phenomena whose effects we systematically examine. As regards microphysics, we mainly concentrate upon changes in metallicity, in the distribution among metals, and their consequences for stellar opacity. We also address atmospheric models while the star still lies close to its Hayashi track. Accretion and convective parameters are the macroscopic phenomena we address during the pre-main sequence. Rotational effects are considered along the entire evolution, including more realistic rotation laws. The last part of this PhD thesis makes use of seismology. Today this discipline allows direct probing of the solar internal structure and motions. Its future application to other stars will substantially improve our understanding of them. We derive here some relevant seismic variables for the understanding of stellar evolution. Then we show how this powerful tool permits the determination of fundamental stellar parameters such as the mass or the helium fraction. (author) [fr 16. Rifting in heterogeneous lithosphere: inferences from numerical modeling of the northern North Sea and the Oslo Graben. NARCIS (Netherlands) Pascal Candas, C.; Cloetingh, S.A.P.L. 2002-01-01 Permian rifting and magmatism are widely documented across NW Europe. The different Permian basins often display contrasting structural styles and evolved in lithospheric domains with contrasting past evolution and contrasting thermotectonic ages. In particular, the Oslo Graben and the northern 17.
Simultaneous estimation of lithospheric uplift rates and absolute sea level change in southwest Scandinavia from inversion of sea level data DEFF Research Database (Denmark) Nielsen, Lars; Hansen, Jens Morten; Hede, Mikkel Ulfeldt 2014-01-01 the relative sea level data. Similar independent data do not exist for ancient times. The purpose of this study is to test two simple inversion approaches for simultaneous estimation of lithospheric uplift rates and absolute sea level change rates for ancient times in areas where a dense coverage of relative...... sea level data exists and well-constrained average lithospheric movement values are known from, for example glacial isostatic adjustment (GIA) models. The inversion approaches are tested and used for simultaneous estimation of lithospheric uplift rates and absolute sea level change rates in southwest...... Scandinavia from modern relative sea level data series that cover the period from 1900 to 2000. In both approaches, a priori information is required to solve the inverse problem. A priori information about the average vertical lithospheric movement in the area of interest is critical for the quality... 18. Mesoproterozoic and Paleoproterozoic subcontinental lithospheric mantle domains beneath southern Patagonia: Isotopic evidence for its connection to Africa and Antarctica Czech Academy of Sciences Publication Activity Database Mundl, A.; Ntaflos, T.; Ackerman, Lukáš; Bizimis, M.; Bjerg, E. A.; Hauzenberger, Ch. A. 2015-01-01 Roč. 43, č. 1 (2015), s. 39-42 ISSN 0091-7613 Institutional support: RVO:67985831 Keywords : lithospheric mantle * Mesoproterozoic * Paleoproterozoic Subject RIV: DD - Geochemistry Impact factor: 4.548, year: 2015 19. The Lu-Hf isotope composition of cratonic lithosphere: disequilibrium between garnet and clinopyroxene in kimberlite xenoliths NARCIS (Netherlands) Simon, N.S.C.; Carlson, R.W.; Pearson, D.G.; Davies, G.R. 2002-01-01 12th Annual V.M. 
Goldschmidt Conference, Davos, Switzerland, The Lu-Hf isotope composition of cratonic lithosphere: disequilibrium between garnet and clinopyroxene in kimberlite xenoliths (DTM, Carnegie Institution of Washington), Pearson, D.G. (University of Durham) 20. Fundamentals of converging mining technologies in integrated development of mineral resources of lithosphere Science.gov (United States) Trubetskoy, KN; Galchenko, YuP; Eremenko, VA 2018-03-01 The paper sets forth a theoretical framework for a radically new stage in the development of geotechnologies, under the conditions of the rapidly worsening environmental crisis of a technocratic civilization that uses material extracted from the lithosphere as its source of energy and materials. The authors see an opportunity to overcome the conflict between the technosphere and the biosphere over mineral raw materials by changing the technological paradigm of integrated mineral development: implementing nature-like technologies oriented to the ideas and methods of converging the resources of natural biota, as the object of environmental protection, with geotechnologies, the major source of ecological hazards induced in the course of developing the mineral resources of the lithosphere. 1. Satellite Tidal Magnetic Signals Constrain Oceanic Lithosphere-Asthenosphere Boundary Earth Tomography with Tidal Magnetic Signals Science.gov (United States) Grayver, Alexander V.; Schnepf, Neesha R.; Kuvshinov, Alexey V.; Sabaka, Terence J.; Chandrasekharan, Manoj; Olsen, Nils 2016-01-01 The tidal flow of electrically conductive oceans through the geomagnetic field results in the generation of secondary magnetic signals, which provide information on the subsurface structure. Data from the new generation of satellites were shown to contain magnetic signals due to tidal flow; however, there are no reports that these signals have been used to infer subsurface structure.
Here we use satellite-detected tidal magnetic fields to image the global electrical structure of the oceanic lithosphere and upper mantle down to a depth of about 250 km. The model derived from more than 12 years of satellite data reveals an approximately 72 km thick upper resistive layer followed by a sharp increase in electrical conductivity, likely associated with the lithosphere-asthenosphere boundary, which separates colder rigid oceanic plates from the ductile and hotter asthenosphere. 2. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field DEFF Research Database (Denmark) Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris 2014-01-01 We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...
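The iteratively reweighted least squares estimation mentioned above for the monopole source values can be sketched in a generic form. All data below are synthetic and the weighting scheme is a common robust (approximately L1) choice; this is a sketch of the technique, not the CHAMP/Swarm processing code:

```python
import numpy as np

# Generic iteratively reweighted least squares (IRLS): solve G m ~ d robustly
# by repeatedly solving a weighted least-squares problem whose weights shrink
# the influence of large residuals (approximating an L1 misfit).
def irls(G, d, n_iter=25, eps=1e-6):
    m = np.linalg.lstsq(G, d, rcond=None)[0]   # ordinary least-squares start
    for _ in range(n_iter):
        r = d - G @ m                          # current residuals
        w = 1.0 / np.sqrt(r ** 2 + eps ** 2)   # robust weights, ~1/|r|
        GtW = G.T * w                          # G^T W without forming W
        m = np.linalg.solve(GtW @ G, GtW @ d)  # weighted normal equations
    return m

rng = np.random.default_rng(0)
G = rng.normal(size=(60, 3))                   # synthetic design matrix
m_true = np.array([1.0, -2.0, 0.5])
d = G @ m_true
d[::10] += 5.0                                 # a few gross outliers
m_est = irls(G, d)                             # recovers m_true closely
```

The robust weighting is what makes the estimate insensitive to the outlier-contaminated observations, which ordinary least squares would let bias the solution.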
This result encourages using M2 satellite magnetic data to constrain subsurface electrical conductivity in oceanic regions. Traditional satellite-based induction studies using signals of magnetospheric origin are mostly sensitive to conducting structures because of the inductive coupling between primary and induced sources. In contrast, galvanic coupling from the oceanic tidal signal allows for studying less conductive, shallower structures. We perform global 3-D electromagnetic numerical simulations to investigate the sensitivity of M2 signals to conductivity distributions at different depths. The results of our sensitivity analysis suggest it will be promising to use M2 oceanic signals detected at satellite altitude for probing lithospheric and upper mantle conductivity. Our simulations also suggest that M2 seafloor electric and magnetic field data may provide complementary details to better constrain lithospheric conductivity. 4. The contribution of the Precambrian continental lithosphere to global H2 production. Science.gov (United States) Lollar, Barbara Sherwood; Onstott, T C; Lacrampe-Couloume, G; Ballentine, C J 2014-12-18 Microbial ecosystems can be sustained by hydrogen gas (H2)-producing water-rock interactions in the Earth's subsurface and at deep ocean vents. Current estimates of global H2 production from the marine lithosphere by water-rock reactions (hydration) are in the range of 10^11 moles per year. Recent explorations of saline fracture waters in the Precambrian continental subsurface have identified environments as rich in H2 as hydrothermal vents and seafloor-spreading centres and have suggested a link between dissolved H2 and the radiolytic dissociation of water. However, extrapolation of a regional H2 flux based on the deep gold mines of the Witwatersrand basin in South Africa yields a contribution of the Precambrian lithosphere to global H2 production that was thought to be negligible (0.009 × 10^11 moles per year).
Here we present a global compilation of published and new H2 concentration data obtained from Precambrian rocks and find that the H2 production potential of the Precambrian continental lithosphere has been underestimated. We suggest that this can be explained by a lack of consideration of additional H2-producing reactions, such as serpentinization, and the absence of appropriate scaling of H2 measurements from these environments to account for the fact that Precambrian crust represents over 70 per cent of global continental crust surface area. If H2 production via both radiolysis and hydration reactions is taken into account, our estimate of H2 production rates from the Precambrian continental lithosphere of 0.36-2.27 × 10^11 moles per year is comparable to estimates from marine systems. 5. Long memory of mantle lithosphere fabric — European LAB constrained from seismic anisotropy Czech Academy of Sciences Publication Activity Database 2010-01-01 Roč. 120, č. 1-2 (2010), s. 131-143 ISSN 0024-4937 R&D Projects: GA AV ČR IAA300120709; GA ČR GA205/07/1088 Institutional research plan: CEZ:AV0Z30120515 Keywords: lithosphere-asthenosphere boundary * fossil anisotropy * travel-time residuals Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 3.121, year: 2010 6. Asthenosphere versus lithosphere as possible sources for basaltic magmas erupted during formation of the Red Sea International Nuclear Information System (INIS) Altherr, R.; Henjes-Kunst, F.; Baumann, A. 1990-01-01 Representative basalts from the axial trough of the Red Sea and from volcanic fields of the Arabian Peninsula ranging in composition from N-type MORB to basanite and in age from Early Miocene to Recent show a limited variation in their isotopic compositions: 87Sr/86Sr = 0.70240-0.70361, 206Pb/204Pb = 18.040-19.634, 207Pb/204Pb = 15.496-15.666, 208Pb/204Pb = 37.808-39.710, 143Nd/144Nd = 0.513194-0.512670.
There is a poorly constrained correlation between chemical composition and isotope ratios: with increasing alkalinity, Sr and Pb isotope ratios increase and the Nd isotope ratio tends to decrease. In Pb isotope variation diagrams most of the basalts plot significantly above the NHRLs, irrespective of tectonic setting, i.e. thickness of underlying crust and/or lithosphere. MORBs from the axial trough of the Red Sea have higher Pb isotope ratios for a given 87Sr/86Sr than MORBs from the Indian Ocean ridges, including the Carlsberg Ridge. It is therefore suggested that both spreading ridges tap different convective systems in the asthenosphere. The tectonic setting of the basalts is reflected in their Nd-Sr isotope characteristics. Basalts from areas where the continental lithosphere is drastically thinned or absent (i.e. Red Sea axial trough and coastal plain, Afar) plot along a reference line defined by N-type MORB and Tristan da Cunha. Basalts erupted in areas with Pan-African crust of normal thickness and moderately thinned lithospheric mantle (i.e. rift shoulder) are characterized by relatively low 143Nd/144Nd ratios and plot below the reference line towards an EM I component, which is also found in the subcontinental lithospheric mantle. These differences in the Nd-Sr isotopic compositions of the basalts are independent of bulk-rock chemistry and are therefore controlled by tectonic setting alone. (orig./WL) 7. Detailed Configuration of the Underthrusting Indian Lithosphere Beneath Western Tibet Revealed by Receiver Function Images Science.gov (United States) Xu, Qiang; Zhao, Junmeng; Yuan, Xiaohui; Liu, Hongbing; Pei, Shunping 2017-10-01 We analyze the teleseismic waveform data recorded by 42 temporary stations from the Y2 and ANTILOPE-1 arrays using the P and S receiver function techniques to investigate the lithospheric structure beneath western Tibet.
The Moho is reliably identified as a prominent feature at depths of 55-82 km in the stacked traces and in depth migrated images. It has a concave shape and reaches the deepest location at about 80 km north of the Indus-Yarlung suture (IYS). An intracrustal discontinuity is observed at 55 km depth below the southern Lhasa terrane, which could represent the upper border of the eclogitized underthrusting Indian lower crust. Underthrusting of the Indian crust has been widely observed beneath the Lhasa terrane and correlates well with the Bouguer gravity low, suggesting that the gravity anomalies in the Lhasa terrane are induced by topography of the Moho. At 20 km depth, a midcrustal low-velocity zone (LVZ) is observed beneath the Tethyan Himalaya and southern Lhasa terrane, suggesting a layer of partial melts that decouples the thrust/fold deformation of the upper crust from the shortening and underthrusting in the lower crust. The Sp conversions at the lithosphere-asthenosphere boundary (LAB) can be recognized at depths of 130-200 km, showing that the Indian lithospheric mantle is underthrusting with a ramp-flat shape beneath southern Tibet and probably is detached from the lower crust immediately under the IYS. Our observations reconstruct the configuration of the underthrusting Indian lithosphere and indicate significant along strike variations. 8. Hf-Nd isotope decoupling in the oceanic lithosphere: constraints from spinel peridotites from Oahu, Hawaii Science.gov (United States) Bizimis, Michael; Sen, Gautam; Salters, Vincent J. M. 2004-01-01 We present a detailed geochemical investigation on the Hf, Nd and Sr isotope compositions and trace and major element contents of clinopyroxene mineral separates from spinel lherzolite xenoliths from the island of Oahu, Hawaii. These peridotites are believed to represent the depleted oceanic lithosphere beneath Oahu, which is a residue of a MORB-related melting event some 80-100 Ma ago at a mid-ocean ridge. 
Clinopyroxenes from peridotites from the Salt Lake Crater (SLC) show a large range of Hf isotopic compositions, from ɛHf = 12.2 (similar to the Honolulu volcanics series) to extremely radiogenic, ɛHf = 65, at nearly constant 143Nd/144Nd ratios (ɛNd = 7-8). None of these samples show any isotopic evidence for interaction with Koolau-type melts. A single xenolith from the Pali vent is the only sample with Hf and Nd isotopic compositions that falls within the MORB field. The Hf isotopes correlate positively with the degree of depletion in the clinopyroxene (e.g. increasing Mg#, Cr#, decreasing Ti and heavy REE contents), but also with increasing Zr and Hf depletions relative to the adjacent REE in a compatibility diagram. The Lu/Hf isotope systematics of the SLC clinopyroxenes define apparent ages of 500 Ma or older, and these compositions cannot be explained by mixing between any type of Hawaiian melts and the depleted Pacific lithosphere. Metasomatism of an ancient (e.g. 1 Ga or older) depleted peridotite protolith can, in principle, explain these apparent ages and the Nd-Hf isotope decoupling, but requires that the most depleted samples were subject to the least amount of metasomatism. Alternatively, the combined isotope, trace and major element compositions of these clinopyroxenes are best described by metasomatism of the 80-100 Ma depleted oceanic lithosphere by melts produced by extensive mantle-melt interaction between Honolulu Volcanics-type melts and the depleted lithosphere. 9. Gravity anomalies and flexure of the lithosphere at the Middle Amazon Basin, Brazil Science.gov (United States) Nunn, Jeffrey A.; Aires, Jose R. 1988-01-01 The Middle Amazon Basin is a large Paleozoic sedimentary basin on the Amazonian craton in South America. It contains up to 7 km of mainly shallow water sediments. A chain of Bouguer gravity highs of approximately +40 to +90 mGals transects the basin roughly coincident with the axis of maximum thickness of sediment.
The gravity highs are flanked on either side by gravity lows of approximately -40 mGals. The observed gravity anomalies can be explained by a steeply sided zone of high density in the lower crust varying in width from 100 to 200 km. Within this region, the continental crust has been intruded/replaced by denser material to more than half its original thickness of 45-50 km. The much wider sedimentary basin results from regional compensation of the subsurface load and the subsequent load of accumulated sediments by flexure of the lithosphere. The observed geometry of the basin is consistent with an elastic lithosphere model with a mechanical thickness of 15-20 km. Although this value is lower than expected for a stable cratonic region of Early Proterozoic age, it is within the accepted range of effective elastic thicknesses for the Earth. Rapid subsidence during the late Paleozoic may be evidence of a second tectonic event or lithospheric relaxation, which could lower the effective mechanical thickness of the lithosphere. The high-density zone in the lower crust, as delineated by gravity and flexural modeling, has a complex sinuous geometry which is narrow and south of the axis of maximum sediment thickness on the east and west margins and wide and offset to the north in the center of the basin. The linear trough geometry of the basin itself is a result of smoothing by regional compensation of the load in the lower crust. 10. Bottom to top lithosphere structure and evolution of western Eger Rift (Central Europe) Czech Academy of Sciences Publication Activity Database Babuška, Vladislav; Fiala, Jiří; Plomerová, Jaroslava 2010-01-01 Roč. 99, č. 4 (2010), s.
891-907 ISSN 1437-3254 R&D Projects: GA ČR GA205/07/1088; GA AV ČR IAA300120709 Institutional research plan: CEZ:AV0Z30120515; CEZ:AV0Z30130516 Keywords: western Bohemian Massif * Eger (Ohře) Rift * lithosphere structure and development * mantle seismic anisotropy Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.980, year: 2010 11. Mars: Lithospheric Flexure of the Tharsis Montes Volcanoes and the Evolutionary Relationship to Their Tectonic History Science.gov (United States) Chute, H.; Dombard, A. J.; Byrne, P. K. 2017-12-01 Lithospheric flexure associated with Arsia, Pavonis, and Ascraeus Montes has been previously studied to constrain the timeline and breadth of endogenic surface features surrounding these volcanoes. Here, we simulate the radial extent of two specific load-related features: annular graben and flank terraces. Detailed mapping of Ascraeus Mons (the youngest of the three volcanoes) showed a phase of compression of the edifice, forming the terraces and an annulus of graben immediately off the flanks, followed by a period of extension that formed additional graben superposed on the terraces on the lower flanks of the edifice. This transition from compression to extension on the lower flanks has been difficult to reconcile in mechanical models. We explore, with finite-element simulations, the effects of a thermal anomaly associated with an intrusive crustal underplate, which results in locally thinning the lithosphere (in contrast to past efforts that assumed a constant-thickness lithosphere). We find that it is primarily the horizontal extent of this thermal anomaly that governs how the lithosphere flexes under a volcano, as well as the transition from flank compression to a tight annulus of extensional stresses.
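The flexural behavior invoked in these studies is controlled by the rigidity of a thin elastic plate. As a reference point, a minimal sketch of the standard thin-plate relation follows; the Young's modulus and Poisson's ratio are generic illustrative values, not parameters quoted by either study:

```python
# Flexural rigidity of a thin elastic plate:
#   D = E * Te**3 / (12 * (1 - nu**2))
# where Te is the effective elastic (mechanical) thickness. Larger D means
# the load is supported over a broader region, i.e. a wider flexural basin.
# E and nu below are generic textbook-style values, assumed for illustration.
def flexural_rigidity(Te, E=1.0e11, nu=0.25):
    """Rigidity D in N*m for elastic thickness Te in meters."""
    return E * Te ** 3 / (12.0 * (1.0 - nu ** 2))

D_upper = flexural_rigidity(20e3)  # Te = 20 km, upper end of the Amazon estimate
D_lower = flexural_rigidity(15e3)  # Te = 15 km, lower end
```

Because D scales with the cube of Te, even the modest 15-20 km range quoted for the Middle Amazon Basin spans more than a factor of two in rigidity, which is why Te is the key free parameter in such flexure models.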
Specifically, we propose that the structures on Ascraeus may be consistent with an early stage of volcanic growth accompanied by an underplate about the same width as the edifice that narrowed as volcanism waned, resulting in an inward migration of the extensional horizontal stresses from the surrounding plains onto the lower flanks. By linking the surface strains on the volcano with the volcano-tectonic evolution predicted by our flexure model, we can further constrain a more accurate timeline for the tectonic history of Ascraeus Mons. More broadly, because these tectonic structures are commonly observed, our results provide a general evolutionary model for large shield volcanoes on Mars. 12. Recycling of Oceanic Lithosphere: Water, fO2 and Fe-isotope Constraints Science.gov (United States) Bizimis, M.; Peslier, A. H.; McCammon, C. A.; Keshav, S.; Williams, H. M. 2014-01-01 Spinel peridotite and garnet pyroxenite xenoliths from Hawaii provide important clues about the composition of the oceanic lithosphere, and can be used to assess its contribution to mantle heterogeneity upon recycling. The peridotites have lower bulk H2O (approximately 70-114 ppm) than the MORB source, qualitatively consistent with melt depletion. The garnet pyroxenites (high pressure cumulates) have higher H2O (200-460 ppm, up to 550 ppm accounting for phlogopite) and low H2O/Ce ratios (less than 100). The peridotites have relatively light Fe-isotopes (delta Fe-57 = -0.34 to 0.13) that decrease with increasing depletion, while the pyroxenites are significantly heavier (delta Fe-57 up to 0.3). The total Fe-isotope variability observed in the xenoliths, as well as in MORB and OIB, is larger than can be explained by existing melting models. The high H2O and low H2O/Ce ratios of pyroxenites are similar to estimates of EM-type OIB sources, while their heavy delta Fe-57 values are similar to some Society and Cook-Austral basalts. Therefore, recycling of mineralogically enriched oceanic lithosphere (i.e.
pyroxenites) may contribute to OIB sources and mantle heterogeneity. The Fe(3+)/ΣFe systematics of these xenoliths also suggest that there might be lateral redox gradients within the lithosphere, between juxtaposed oxidized spinel peridotites (deltaFMQ = -0.7 to 1.6, at 15 kb) and more reduced pyroxenites (deltaFMQ = -2 to -0.4, at 20-25 kb). Such mineralogically and compositionally imposed fO2 gradients may generate local redox melting due to changes in fluid speciation (e.g. reduced fluids from pyroxenite encountering more oxidized peridotite). Formation of such incipient, small-degree melts could further contribute to metasomatic features seen in peridotites, to mantle heterogeneity, and to the low-velocity and high-electrical-conductivity structures near the base of the lithosphere and upper mantle. 13. Short wavelength lateral variability of lithospheric mantle beneath the Middle Atlas (Morocco) as recorded by mantle xenoliths Science.gov (United States) El Messbahi, Hicham; Bodinier, Jean-Louis; Vauchez, Alain; Dautria, Jean-Marie; Ouali, Houssa; Garrido, Carlos J. 2015-05-01 The Middle Atlas is a region where xenolith-bearing volcanism roughly coincides with the maximum of lithospheric thinning beneath continental Morocco. It is therefore a key area to study the mechanisms of lithospheric thinning and constrain the component of mantle buoyancy that is required to explain the Moroccan topography. Samples from the two main xenolith localities, the Bou Ibalghatene and Tafraoute maars, have been investigated for their mineralogy, microstructures, crystallographic preferred orientation, and whole-rock and mineral compositions. While Bou Ibalghatene belongs to the main Middle Atlas volcanic field, in the 'tabular' Middle Atlas, Tafraoute is situated about 45 km away, on the North Middle Atlas Fault that separates the 'folded' Middle Atlas, to the South-East, from the 'tabular' Middle Atlas, to the North-West.
Both xenolith suites record infiltration of sub-lithospheric melts that are akin to the Middle Atlas volcanism but were differentiated to variable degrees as a result of interactions with lithospheric mantle. However, while the Bou Ibalghatene mantle was densely traversed by high melt fractions, mostly focused in melt conduits, the Tafraoute suite records heterogeneous infiltration of smaller melt fractions that migrated diffusively, by intergranular porous flow. As a consequence, the lithospheric mantle beneath Bou Ibalghatene was strongly modified by melt-rock interactions in the Cenozoic, whereas the Tafraoute mantle preserves the record of extensional lithospheric thinning, most likely related to Mesozoic rifting. The two xenolith suites illustrate distinct mechanisms of lithospheric thinning: extensional thinning in Tafraoute, where hydrous incongruent melting triggered by decompression probably played a key role in favouring strain localisation, vs. thermal erosion in Bou Ibalghatene, favoured and guided by a dense network of melt conduits. Our results lend support to the suggestion that lithospheric thinning beneath the Atlas 14. Electromagnetic study of lithospheric structure in the marginal zone of East European Craton in NW Poland Science.gov (United States) Jóźwiak, Waldemar 2013-10-01 The marginal zone of the East European Platform, an area of key importance for our understanding of the geotectonic history of Europe, has been a challenge for geophysicists for many years. The basic research method is the seismic survey, but many important data on physical properties and structure of the lithosphere may also be provided by electromagnetic methods. In this paper, results of deep basement study by electromagnetic methods performed in Poland since the mid-1960s are presented. Over this time, several hundred long-period soundings have been executed, providing an assessment of the electric conductivity distribution in the crust and upper mantle.
Numerous 1D, 2D, and pseudo-3D electric conductivity models were constructed, and a new interpretation method based on Horizontal Magnetic Tensor analysis has been applied recently. The results show that the contact zone has the character of a lithospheric discontinuity, and there are distinct differences in geoelectric structures between the Precambrian Platform, the transitional zone (TESZ), and the Paleozoic Platform. The widespread conducting complexes in the crust, with integral conductivity values reaching 10 000 S at 20-30 km depths, are the most striking feature. They are most likely consequences of geological processes related to Caledonian and Variscan orogenesis. The upper mantle conductivity is also variable, the thickness of highly resistive lithospheric plates ranging from 120-140 km under the Paleozoic Platform to 220-240 km under the East European Platform. 15. Observatory geoelectric fields induced in a two-layer lithosphere during magnetic storms Science.gov (United States) Love, Jeffrey J.; Swidinsky, Andrei 2015-01-01 We report on the development and validation of an algorithm for estimating geoelectric fields induced in the lithosphere beneath an observatory during a magnetic storm. To accommodate induction in three-dimensional lithospheric electrical conductivity, we analyze a simple nine-parameter model: two horizontal layers, each with uniform electrical conductivity properties given by independent distortion tensors. With Laplace transformation of the induction equations into the complex frequency domain, we obtain a transfer function describing induction of observatory geoelectric fields having frequency-dependent polarization. Upon inverse transformation back to the time domain, the convolution of the corresponding impulse-response function with a geomagnetic time series yields an estimated geoelectric time series.
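The final estimation step described above, convolving an impulse-response function with a geomagnetic time series to obtain an estimated geoelectric time series, can be sketched as follows. The exponential impulse response and sinusoidal input are synthetic placeholders, not the parameters fitted to the Kakioka data:

```python
import numpy as np

# Sketch of transfer-function-based estimation in the time domain: the
# geoelectric series is the causal convolution of the impulse response
# (the inverse transform of the transfer function) with the geomagnetic series.
def estimate_geoelectric(b_series, impulse_response):
    """Causal convolution, truncated to the length of the input series."""
    return np.convolve(b_series, impulse_response)[: len(b_series)]

dt = 1.0                                 # 1-s sampling, as for the Kakioka data
t = np.arange(0.0, 120.0, dt)
h = np.exp(-t / 10.0) * dt               # illustrative impulse response (assumed)
b = np.sin(2.0 * np.pi * t / 30.0)       # synthetic geomagnetic variation
e = estimate_geoelectric(b, h)           # estimated geoelectric time series
```

In an operational setting the impulse response would come from the fitted nine-parameter conductivity model, and the convolution would run on streaming 1-s observatory data.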
We obtain an optimized set of conductivity parameters using 1-s resolution geomagnetic and geoelectric field data collected at the Kakioka, Japan, observatory for five different intense magnetic storms, including the October 2003 Halloween storm; our estimated geoelectric field accounts for 93% of that measured during the Halloween storm. This work demonstrates the need for detailed modeling of the Earth’s lithospheric conductivity structure and the utility of co-located geomagnetic and geoelectric monitoring. 16. A lithospheric magnetic field model derived from the Swarm satellite magnetic field measurements Science.gov (United States) Hulot, G.; Thebault, E.; Vigneron, P. 2015-12-01 The Swarm constellation of satellites was launched in November 2013 and has since then delivered high quality scalar and vector magnetic field measurements. A consortium of several research institutions was selected by the European Space Agency (ESA) to provide a number of scientific products which will be made available to the scientific community. Within this framework, specific tools were tailor-made to better extract the magnetic signal emanating from the Earth's lithosphere. These tools rely on the scalar gradient measured by the lower pair of Swarm satellites and on a regional modeling scheme that is more sensitive to small spatial scales and weak signals than the standard spherical harmonic modeling. In this presentation, we report on various activities related to data analysis and processing. We assess the efficiency of this dedicated chain for modeling the lithospheric magnetic field using more than one year of measurements, and finally discuss refinements that are continuously implemented in order to further improve the robustness and the spatial resolution of the lithospheric field model. 17.
Isotopic characterisation of the sub-continental lithospheric mantle beneath Zealandia, a rifted fragment of Gondwana DEFF Research Database (Denmark) Waight, Tod Earle; Scott, James M.; van der Meer, Quinten Har Adriaan 2013-01-01 The greater New Zealand region, known as Zealandia, represents an amalgamation of crustal fragments accreted to the paleo-Pacific Gondwana margin and which underwent significant thinning during the subsequent split from Australia and Antarctica in the mid-Cretaceous following opening of the Tasma...... Sea and the Southern Ocean. We present Sr, Nd and Pb isotopes and laser ablation trace element data for a comprehensive suite of clinopyroxene separates from spinel peridotite xenoliths (lherzolite to harzburgite) from the sub-continental lithospheric mantle across southern New Zealand...... composition, age or geographical separation. These isotopic compositions indicate that the sub-continental lithospheric mantle under southern New Zealand has a regionally distinct and pervasive FOZO to HIMU – like signature. The isotopic signatures are also similar to those of the alkaline magmas...... that transported the xenoliths and suggest that most of the HIMU signature observed in the volcanics could be derived from a major source component in the sub-continental lithospheric mantle. Trace element abundances in clinopyroxene are highly heterogeneous and vary from LREE-enriched, relatively flat and MORB... 18. Tectonically asymmetric Earth: From net rotation to polarized westward drift of the lithosphere Directory of Open Access Journals (Sweden) Carlo Doglioni 2015-05-01 Full Text Available The possibility of a net rotation of the lithosphere with respect to the mantle is generally overlooked since it depends on the adopted mantle reference frames, which are arbitrary. We review the geological and geophysical signatures of plate boundaries, and show that they are markedly asymmetric worldwide. 
Then we compare available reference frames of plate motions relative to the mantle and discuss which is best able to fit global tectonic data. Different assumptions about the depths of hotspot sources (below or within the asthenosphere, which decouples the lithosphere from the deep mantle) predict different rates of net rotation of the lithosphere relative to the mantle. In both the widely used no-net-rotation (NNR) reference frame and a low (1°/Ma) net-rotation frame (shallow hotspot source), all plates, albeit at different velocities, move westerly along a curved trajectory, with a tectonic equator tilted about 30° relative to the geographic equator. This is consistent with the observed global tectonic asymmetries. 19. Insights into the lithospheric architecture of Iberia and Morocco from teleseismic body-wave attenuation Science.gov (United States) 2017-11-01 The long and often complicated tectonic history of continental lithosphere results in lateral strength heterogeneities which in turn affect the style and localization of deformation. In this study, we produce a model for the attenuation structure of Iberia and northern Morocco using a waveform-matching approach on P-wave data from teleseismic deep-focus earthquakes. We find that attenuation is correlated with zones of intraplate deformation and seismicity, but do not find a consistent relationship between attenuation and recent volcanism. The main features of our model are low to moderate Δt* in the undeformed Tertiary basins of Spain and high Δt* in areas deformed by the Alpine orogeny. Additionally, low Δt* is found in areas where the Alboran slab is thought to be attached to the Iberian and African lithosphere, and high Δt* where it has detached. These features are robust with respect to inversion parameters, and are consistent with independent data. Very mild backazimuthal dependence of the measurements and comparison with previous results suggest that the source of the attenuation is sub-crustal.
In line with other recent studies, the range of Δt* we observe is much larger than can be expected from lithospheric thickness or temperature variations. 20. Global Models of Ridge-Push Force, Geoid, and Lithospheric Strength of Oceanic plates Science.gov (United States) Mahatsente, Rezene 2017-12-01 An understanding of the transmission of ridge-push related stresses in the interior of oceanic plates is important because ridge-push force is one of the principal forces driving plate motion. Here, I assess the transmission of ridge-push related stresses in oceanic plates by comparing the magnitude of the ridge-push force to the integrated strength of oceanic plates. The strength is determined based on plate cooling and rheological models. The strength analysis includes low-temperature plasticity (LTP) in the upper mantle and assumes a range of possible tectonic conditions and rheology in the plates. The ridge-push force has been derived from the thermal state of the oceanic lithosphere, seafloor depth and crustal age data. The results of modeling show that the transmission of ridge-push related stresses in oceanic plates mainly depends on rheology and predominant tectonic conditions. If the lithosphere has a dry rheology, the estimated strength is higher than the ridge-push force at all ages for compressional tectonics and at old ages (>75 Ma) for extension. Therefore, under such conditions, oceanic plates may not respond to ridge-push force by intraplate deformation. Instead, the plates may transmit the ridge-push related stress in their interior. For a wet rheology, however, the strength of young lithosphere (<75 Ma) is lower than the ridge-push force, so the ridge-push related stress may dissipate in the interior of oceanic plates and diffuse by intraplate deformation. The state of stress within a plate depends on the balance of far-field and intraplate forces. 1.
Contrast of lithospheric dynamics across the southern and eastern margins of the Tibetan Plateau: a numerical study Science.gov (United States) Sun, Yujun; Fan, Taoyuan; Wu, Zhonghai 2018-05-01 Both the southern and eastern margins of the Tibetan Plateau are bounded by cratonic blocks (the Indian plate and the Sichuan basin). However, there are many differences in tectonic deformation, lithospheric structure and surface heat flow between these two margins. What dynamics cause these differences? With the constraints of the lithospheric structure and surface heat flow across the southern and eastern margins of the Tibetan Plateau, we constructed 2-D thermal-mechanical finite-element models to investigate the dynamics across these two margins. The results show that the delamination of the mantle lithosphere beneath the Lhasa terrane in the Oligocene and the rheological contrast between the Indian and Tibetan crust are the two main factors that control the subduction of the Indian plate. The dynamics across the eastern margin of the Tibetan Plateau are different from the southern margin. During the lateral expansion of the Tibetan Plateau, pure shear thickening is the main deformation characteristic for the Songpan-Ganzi lithosphere. This thickening results in the reduction of the geothermal gradient and surface heat flow. From this study, it can be seen that the delamination of the mantle lithosphere and the rheological contrast between the Tibetan Plateau and its bounding blocks are the two main factors that control the lithospheric deformation and surface heat flow. 2. Using seismology to raise science awareness in kindergarten and elementary levels, with the help of high school students Science.gov (United States) Rocha, F. L.; Silveira, G. M.; Moreira, G.; Afonso, I. P.; Maciel, B. A. P. C.; Melo, M. O.; Neto, R. P.; Gonçalves, M.; Marques, G.; Hartmann, R. P. 2014-12-01 Teaching students, aged from 4 up to 18 years old, is a challenging task.
It continuously implies new strategies and new subjects adapted to all of them. This is even more evident when we have to teach natural-hazards scientific aspects and safe attitudes toward risk. We often see that most of the high-school students (16-18 years old) are not motivated for extra-curricular activities implying science and/or behaviour changes. But they have a very positive response when we give them some responsibility. On top of that, we also realised that young children are quite receptive to the involvement of older students in the school environment. Taking this into consideration, our project uses the K-12 students to prepare scientific activities and subjects, based on questions which they need to answer themselves. The students need to answer those questions and, only then, adapt and teach the right answers to the different school levels. With this approach, we challenged the students to solve three questions: How to use a SEP seismometer at school, and its data? How to set up a shaking table? How to introduce waves and vibrations content to students of all ages? During the project they developed many science skills, and worked in close cooperation with teachers, the parents association and the seismology research group at Instituto Dom Luíz. As a result, it was possible to reach all school students with the help of the K-12 ones. This is an outcome of the project W-Shake, a Parents-in-Science Initiative to promote the study of seismology and related subjects. This project, supported by the Portuguese "Ciência Viva" program, results from a direct cooperation between the parents association, science school-teachers and the seismology research group at Instituto Dom Luíz. 3.
Lithosphere destabilization by melt percolation during pre-oceanic rifting: Evidence from Alpine-Apennine ophiolitic peridotites Science.gov (United States) Piccardo, Giovanni; Ranalli, Giorgio 2017-04-01 Orogenic peridotites from Alpine-Apennine ophiolite Massifs (Lanzo, Voltri, External and Internal Ligurides, - NW Italy, and Mt. Maggiore - Corsica) derive from the mantle lithosphere of the Ligurian Tethys. Field/structural and petrologic/geochemical studies provide constraints on the evolution of the lithospheric mantle during pre-oceanic passive rifting of the late Jurassic Ligurian Tethys ocean. Continental rifting by far-field tectonic forces induced extension of the lithosphere by means of km-scale extensional shear zones that developed before infiltration of melts from the asthenosphere (Piccardo and Vissers, 2007). After significant thinning of the lithosphere, the passively upwelling asthenosphere underwent spinel-facies decompression melting along the axial zone of the extensional system. Silica-undersaturated melt fractions percolated through the lithospheric mantle via diffuse/focused porous flow and interacted with the host peridotite through pyroxenes-dissolving/olivine-precipitating melt/rock reactions. Pyroxene dissolution and olivine precipitation modified the composition of the primary silica-undersaturated melts into derivative silica-saturated melts, while the host lithospheric spinel lherzolites were transformed into pyroxene-depleted/olivine-enriched reactive spinel harzburgites and dunites. The derivative liquids interacted through olivine-dissolving/orthopyroxene+plagioclase-crystallizing reactions with the host peridotites that were impregnated and refertilized (Piccardo et al., 2015). 
The saturated melts stagnated and crystallized in the shallow mantle lithosphere (as testified by diffuse interstitial crystallization of euhedral orthopyroxene and anhedral plagioclase) and locally ponded, forming orthopyroxene-rich/olivine-free gabbro-norite pods (Piccardo and Guarnieri, 2011). Reactive and impregnated peridotites are characterized by high equilibration temperatures (up to 1250 °C) even at low pressure, plagioclase-peridotite facies 4. Eagle Pass Jr. High Seismology Team: Strategies for Engaging Middle School "At-Risk" Students in Authentic Research Science.gov (United States) Brunt, M. R.; Ellins, K. K.; Frohlich, C. A. 2011-12-01 In 2008, during my participation in the NSF-sponsored Texas Earth & Space Science (TXESS) Revolution professional development program, I was awarded an AS-1 seismograph through IRIS's Seismographs in Schools Program. This program serves to create an international educational seismic network that allows teachers across the country and around the world to share seismic data in real-time using online tools, classroom activities, and technical support documents for seismic instruments. Soon after receiving my AS-1, I founded and began sponsoring the Eagle Pass Jr. High Seismology Team which consists of selected 7th and 8th grade students. Eagle Pass Jr. High is a Title 1 school that serves a predominantly "at-risk" Hispanic population. We meet after school once a week to learn about earthquakes, seismic waves, analyze recorded seismic event data using computer software programming, and correspond with other students from schools around the country. This team approach has been well received by fellow TXESS Revolution teachers with AS-1 seismographs and will be implemented by David Boyd, STEM coordinator for Williams Preparatory Academy in Dallas, Texas this fall 2011. 
All earthquakes recorded by our seismograph station (EPTX), which has remained online and actively recording seismic data since 2008, are catalogued and then plotted on a large world map displayed on my classroom wall. A real-time seismogram image updates every five minutes and along with all earthquakes recorded since installation can be viewed on our webpage http://www.iris.edu/hq/ssn/schools/view/eptx. During the 2010-2011 school year, my seismology team and I participated in an earthquake research study led by Dr. Cliff Frohlich at the Institute for Geophysics. The study examined seismograms and felt reports for the 25 April 2010 Alice, Texas, earthquake, in order to investigate its possible connection to oil and gas production in the Stratton oil and gas field. A research paper detailing our findings 5. Garnet Pyroxenites from Kaula, Hawaii: Implications for Plume-Lithosphere Interaction Science.gov (United States) Bizimis, M.; Garcia, M. O.; Norman, M. D. 2006-12-01 The presence of garnet pyroxenite xenoliths on Oahu and Kaula Islands, Hawaii, provides the rare opportunity to investigate the composition of the deeper oceanic mantle lithosphere and the nature of plume-lithosphere interaction in two dimensions, downstream from the center of the Hawaiian plume. Kaula (60 miles SW of Kauai) is on the same bathymetric shallow as Kauai and the Kaula-Niihau-Kauai islands form a cross-trend relationship to the Hawaiian Island ridge. Here, we present the first Sr-Nd isotope data on clinopyroxenes (cpx) from Kaula pyroxenites, and we compare them with the Salt Lake Crater (SLC) pyroxenites from Oahu. The Kaula cpx major element compositions overlap those of the (more variable) SLC pyroxenites (e.g. Mg# = 0.79-0.83), except for their higher Al2O3 contents (9% vs. 5-8%) than the SLC. The Kaula cpx are LREE enriched with elevated Dy/Yb ratios, similar to the SLC pyroxenites and characteristic of the presence of garnet that preferentially incorporates the HREE. 
In Sr-Nd isotope space, the Kaula pyroxenite compositions (87Sr/86Sr = 0.70312-0.70326, ɛNd = 7.2-8.6) overlap those of both the Oahu-Kauai post-erosional lavas and the SLC pyroxenites, falling at the isotopically depleted end of the Hawaiian lava compositions. The depleted Sr-Nd isotope compositions of the Kaula pyroxenites suggest that they are not related to the isotopically enriched shield-stage Hawaiian lavas, either as a source material (i.e. recycled eclogite) or as cumulates. Their elevated 87Sr/86Sr ratios relative to MORB also suggest that they are not likely MORB-related cumulates. The similarities between the Oahu and Kaula pyroxenites, some 200 km apart, suggest the widespread presence of pyroxenitic material in the deeper (>60 km) Pacific lithosphere between Oahu and Kaula-Kauai, as high-pressure cumulates from melts isotopically similar to the secondary Hawaiian volcanism. The presence of this material within the lower lithosphere is consistent with seismic observations 6. Deep magmatism alters and erodes lithosphere and facilitates decoupling of Rwenzori crustal block Science.gov (United States) Wallner, Herbert; Schmeling, Harro 2013-04-01 The title is the answer to the initiating question "Why are the Rwenzori Mountains so high?" posed at the EGU 2008. Our motivation originates in the extreme topography of the Rwenzori Mountains. The strong, cold Proterozoic crustal horst is situated between rift segments of the western branch of the East African Rift System. Ideas of rift-induced delamination (RID) and melt-induced weakening (MIW) have been tested with one- and two-phase flow physics. Numerical model parameter variations and new observations lead to a favoured model with simple and plausible definitions. Results coincide in the scope of their comparability with different observations or vice versa reduce ambiguity and uncertainties in model input.
The principal laws of the thermo-mechanical physics are the equations of conservation of mass, momentum, energy and composition for a two-phase (matrix-melt) system with nonlinear rheology. A simple solid solution model determines melting and solidification under consideration of depletion and enrichment. The Finite Difference Method with markers is applied to visco-plastic flow using the streamfunction in an Eulerian formulation in 2D. The Compaction Boussinesq approximation and the high-Prandtl-number approximation are employed. Lateral kinematic boundary conditions provide long-wavelength asthenospheric upwelling and extensional stress conditions. Partial melts are generated in the asthenosphere, extracted above a critical fraction, and emplaced into a given intrusion level. Temperature anomalies positioned beneath the future rifts, the sole specialization to the Rwenzori situation, localize melts which are very effective in weakening the lithosphere. Convection patterns tend to generate dripping instabilities at the lithospheric base; multiple slabs detach and distort uprising asthenosphere; plumes migrate, join and split. In spite of the apparently chaotic flow behaviour, a characteristic recurrence time of high-velocity events (drips, plumes) emerges. Chimneys of increased 7. Seismic evidence of the lithosphere-asthenosphere boundary beneath Izu-Bonin area Science.gov (United States) Cui, H.; Gao, Y.; Zhou, Y. 2016-12-01 The lithosphere-asthenosphere boundary (LAB), separating the rigid lithosphere and the ductile asthenosphere layers, is the seismic discontinuity with the negative velocity contrast of the Earth's interior [Fischer et al., 2010]. The LAB has also been termed the Gutenberg (G) discontinuity that defines the top of the low velocity zone in the upper mantle [Gutenberg, 1959; Revenaugh and Jordan, 1991]. The seismic velocity, viscosity, resistivity and other physical parameters change rapidly with depth across the boundary [Eaton et al., 2009].
Seismic detections of the LAB in subduction zone regions are of great help to understand the interactions between the lithosphere and asthenosphere layers and the geodynamic processes related to slab subduction. In this study, the vertical broadband waveforms are collected from three deep earthquake events occurring from 2000 to 2014 with focal depths of 400-600 km beneath the Izu-Bonin area. The waveform data are processed with the linear slant stack method [Zang and Zhou, 2002] to obtain the vespagrams in the relative travel-time to slowness domain and the stacked waveforms. The sP precursors reflected at the LAB (sLABP), which have negative polarities with amplitude ratios of 0.17-0.21 relative to the sP phases, are successfully extracted. Based on the one-dimensional modified velocity model (IASP91-IB), we obtain the distributions of six reflection points of the sLABP phases near the source region. Our results reveal that the LAB depths range between 58 and 65 km beneath the Izu-Bonin Arc, with an average depth of 62 km and a small topography of 7 km. Compared with the results for the tectonically stable areas of the Philippine Sea [Kawakatsu et al., 2009; Kumar and Kawakatsu, 2011], the oceanic lithosphere beneath the Izu-Bonin Arc shows obvious thinning. We infer that the lithospheric thinning is closely related to partial melting, which is caused by the volatiles continuously released 8. The Lithosphere-asthenosphere Boundary beneath the South Island of New Zealand Science.gov (United States) Hua, J.; Fischer, K. M.; Savage, M. K. 2017-12-01 Lithosphere-asthenosphere boundary (LAB) properties beneath the South Island of New Zealand have been imaged by Sp receiver function common-conversion point stacking.
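The linear slant-stack (vespagram) processing cited above, in entry 7, can be sketched in a few lines: for each trial slowness, every record is shifted by slowness times offset and the array is summed, so a coherent arrival stacks constructively only at its true slowness. The traces, offsets, and slowness grid below are synthetic illustrations, not the study's data.

```python
def slant_stack(traces, offsets, slownesses, dt):
    """Delay-and-sum slant stack.

    traces     : list of equal-length sample lists (one per station)
    offsets    : station offsets from the array reference (e.g. km)
    slownesses : trial slownesses (e.g. s/km)
    dt         : sample interval (s)
    Returns one stacked trace per trial slowness (a 'vespagram'),
    using nearest-sample time shifts for simplicity.
    """
    nsamp = len(traces[0])
    vespagram = []
    for p in slownesses:
        stack = [0.0] * nsamp
        for trace, x in zip(traces, offsets):
            shift = int(round(p * x / dt))  # delay in samples for this station
            for i in range(nsamp):
                j = i + shift
                if 0 <= j < nsamp:
                    stack[i] += trace[j]
        vespagram.append([s / len(traces) for s in stack])
    return vespagram
```

Scanning a grid of slownesses and plotting the stacked energy against relative travel time gives the vespagram; the slowness row with the largest coherent amplitude identifies the phase, which is how precursors such as sLABP are separated from the main sP arrival.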
In this transpressional boundary between the Australian and Pacific plates, dextral offset on the Alpine fault and convergence have occurred for the past 20 My, with the Alpine fault now bounded by Australian plate subduction to the south and Pacific plate subduction to the north. This study takes advantage of the long-duration and high-density seismometer networks deployed on or near the South Island, especially 29 broadband stations of the New Zealand permanent seismic network (GeoNet). We obtained 24,980 individual receiver functions by extended-time multi-taper deconvolution, mapping to three-dimensional space using a Fresnel zone approximation. Pervasive strong positive Sp phases are observed in the LAB depth range indicated by surface wave tomography (Ball et al., 2015) and geochemical studies. These phases are interpreted as conversions from a velocity decrease across the LAB. In the central South Island, the LAB is observed to be deeper and broader to the west of the Alpine fault. The deeper LAB to the west of the Alpine fault is consistent with oceanic lithosphere attached to the Australian plate that was partially subducted while also translating parallel to the Alpine fault (e.g. Sutherland, 2000). However, models in which the Pacific lithosphere has been underthrust to the west past the Alpine fault cannot be ruled out. Further north, a zone of thin lithosphere with a strong and vertically localized LAB velocity gradient occurs to the west of the fault, juxtaposed against a region of anomalously weak LAB conversions to the east of the fault. This structure, similar to results of Sp imaging beneath the central segment of the San Andreas fault (Ford et al., 2014), also suggests that lithospheric blocks with contrasting LAB properties meet beneath the Alpine fault. The observed variations in 9. Craton stability and continental lithosphere dynamics during plume-plate interaction Science.gov (United States) Wang, H.; Van Hunen, J.; Pearson, D. 
2013-12-01 Survival of thick cratonic roots in a vigorously convecting mantle system for billions of years has long been studied by the geodynamical community. A high cratonic root strength is generally considered to be the most important factor. We first perform and discuss new numerical models to investigate craton stability with both Newtonian and non-Newtonian rheology in the stagnant lid regime. The results show that only a modest compositional rheological factor of Δη=10 with non-Newtonian rheology is required for the survival of cratonic roots in a stagnant lid regime. A larger rheological factor (100 or more) is needed to maintain similar craton longevity in a Newtonian rheology environment. Furthermore, chemical buoyancy plays an important role in craton stability and its evolution, but only in combination with a suitable compositional rheology. During their long lifespan, cratons experienced a suite of dynamic, tectonothermal events, such as nearby subduction and mantle plume activity. Cratonic nuclei are embedded in shorter-lived, more vulnerable continental areas of different thickness, composition and rheology, which influence the lithospheric dynamics when tectonothermal events happen nearby. South Africa provides a very good example to investigate such dynamic processes as it hosts several cratons and has experienced many episodic thermal events since the Mesozoic, as indicated by a spectrum of magmatic activity. We numerically investigate such an integrated system using the topographic evolution of cratons and the surrounding lithosphere as a diagnostic observable. The post-70 Ma thinning of pericratonic lithosphere by ~50 km around the Kaapvaal craton (Mather et al., 2011) is also investigated through our numerical models. The results show that the pericratonic lithosphere cools and grows faster than cratons do, but is also more likely to be affected by episodic thermal events.
This leads to surface topography change that is significantly larger around the craton than within 10. 3D Thermo-Mechanical Models of Plume-Lithosphere Interactions: Implications for the Kenya rift Science.gov (United States) Scheck-Wenderoth, M.; Koptev, A.; Sippel, J. 2017-12-01 We present three-dimensional (3D) thermo-mechanical models aiming to explore the interaction of an active mantle plume with heterogeneous pre-stressed lithosphere in the Kenya rift region. As shown by the recent data-driven 3D gravity and thermal modeling (Sippel et al., 2017), the integrated strength of the lithosphere for the region of Kenya and northern Tanzania appears to be strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localization and propagation of rifting. In order to test this hypothesis, we have performed a series of ultra-high resolution 3D numerical experiments that include a coupled mantle/lithosphere system in a dynamically and rheologically consistent framework. In contrast to our previous studies assuming a simple and quasi-symmetrical initial condition (Koptev et al., 2015, 2016, 2017), the complex 3D distribution of rock physical properties inferred from geological and geophysical observations (Sippel et al., 2017) has been incorporated into the model setup that comprises a stratified three-layer continental lithosphere composed of an upper and lower crust and lithospheric mantle overlaying the upper mantle. Following the evidence of the presence of a broad low-velocity seismic anomaly under the central parts of the East African Rift system (e.g. Nyblade et al, 2000; Chang et al., 2015), a 200-km radius mantle plume has been seeded at the bottom of a 635 km-depth model box representing a thermal anomaly of 300°C temperature excess. 
In all model runs, results show that the spatial distribution of surface deformation is indeed strongly controlled by crustal structure: within the southern part of the model box, a localized narrow zone stretched in the NS direction (i.e. perpendicular to the applied far-field extension) is aligned along a structural boundary within the lower crust, whereas in the northern part of the model domain, deformation is more diffuse and its eastern limit coincides with 11. ASDF: A New Adaptable Data Format for Seismology Suitable for Large-Scale Workflows Science.gov (United States) Krischer, L.; Smith, J. A.; Spinuso, A.; Tromp, J. 2014-12-01 Increases in the amounts of available data as well as computational power open the possibility to tackle ever larger and more complex problems. This comes with a slew of new problems, two of which are the need for a more efficient use of available resources and a sensible organization and storage of the data. Both need to be satisfied in order to properly scale a problem, and both are frequent bottlenecks in large seismic inversions using ambient noise or more traditional techniques. We present recent developments and ideas regarding a new data format, named ASDF (Adaptable Seismic Data Format), for all branches of seismology, aiding with the aforementioned problems. The key idea is to store all information necessary to fully understand a set of data in a single file. This enables the construction of self-explaining and exchangeable data sets, facilitating collaboration on large-scale problems. We incorporate the existing metadata standards FDSN StationXML and QuakeML together with waveform and auxiliary data into a common container based on the HDF5 standard. A further critical component of the format is the storage of provenance information as an extension of W3C PROV, meaning information about the history of the data, assisting with the general problem of reproducibility. Applications of the proposed new format are numerous.
In the context of seismic tomography it enables the full description and storage of synthetic waveforms including information about the model used, the solver, the parameters, and other variables that influenced the final waveforms. Furthermore, intermediate products like adjoint sources, cross correlations, and receiver functions can be described and, most importantly, exchanged with others. Usability and tool support are crucial for any new format to gain acceptance, and we additionally present a fully functional implementation of this format based on Python and ObsPy. It offers a convenient way to discover and analyze data sets as well as making 12. Applications of seismic spatial wavefield gradient and rotation data in exploration seismology Science.gov (United States) Schmelzbach, C.; Van Renterghem, C.; Sollberger, D.; Häusler, M.; Robertsson, J. O. A. 2017-12-01 Seismic spatial wavefield gradient and rotation data have the potential to open up new ways to address long-standing problems in land-seismic exploration such as identifying and separating P-, S-, and surface waves. Gradient-based acquisition and processing techniques could enable replacing large arrays of densely spaced receivers by sparse spatially-compact receiver layouts or even one single multicomponent station with dedicated instruments (e.g., rotational seismometers). Such approaches to maximize the information content of single-station recordings are also of significant interest for seismic measurements at sites with limited access such as boreholes, the sea bottom, and extraterrestrial seismology. Arrays of conventional three-component (3C) geophones enable measuring not only the particle velocity in three dimensions but also estimating its spatial gradients. Because the free-surface condition allows vertical derivatives to be expressed in terms of horizontal derivatives, the full gradient tensor and, hence, curl and divergence of the wavefield can be computed.
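The array-derived gradient step described above can be illustrated with a minimal three-station finite-difference sketch: horizontal derivatives of the horizontal particle velocity estimated across a small L-shaped array, combined into the vertical rotation rate and the horizontal divergence. The station layout and velocity values are illustrative assumptions, not a published processing scheme.

```python
def array_gradients(v_center, v_east, v_north, dx, dy):
    """Finite-difference horizontal wavefield gradients from a minimal
    three-station array: a reference station, one dx to the east, and
    one dy to the north. Each v_* is an (vx, vy) particle-velocity pair.

    Returns:
      rot_z = 0.5 * (dvy/dx - dvx/dy)   vertical rotation rate
      div_h = dvx/dx + dvy/dy           horizontal divergence
    """
    dvx_dx = (v_east[0] - v_center[0]) / dx
    dvy_dx = (v_east[1] - v_center[1]) / dx
    dvx_dy = (v_north[0] - v_center[0]) / dy
    dvy_dy = (v_north[1] - v_center[1]) / dy
    rot_z = 0.5 * (dvy_dx - dvx_dy)
    div_h = dvx_dx + dvy_dy
    return rot_z, div_h
```

As a sanity check, a rigid rotation about the vertical axis, v = (-w*y, w*x), yields rot_z = w and zero divergence; only S-waves and surface waves carry such rotational motion, which is the basis of the wave-type separation discussed in the entry.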
In total, the three particle velocity components, three rotational components, and divergence result in seven-component (7C) seismic data. Combined particle velocity and gradient data can be used to isolate the incident P- or S-waves at the land surface or the sea bottom using filtering techniques based on the elastodynamic representation theorem. Alternatively, as only S-waves exhibit rotational motion, rotational measurements can be used directly to identify S-waves. We discuss the derivations of the gradient-based filters as well as their application to synthetic and field data, demonstrating that rotational data can be of particular interest for S-wave reflection and P-to-S-wave conversion imaging. The concept of array-derived gradient estimation can be extended to source arrays as well; source arrays therefore allow us to emulate rotational (curl) and dilatational (divergence) sources. Combined with 7C 13. Engaging High School Science Teachers in Field-Based Seismology Research: Opportunities and Challenges Science.gov (United States) Long, M. D. 2015-12-01 Research experiences for secondary school science teachers have been shown to improve their students' test scores, and there is a substantial body of literature about the effectiveness of RET (Research Experience for Teachers) or SWEPT (Scientific Work Experience Programs for Teachers) programs. RET programs enjoy substantial support, and several opportunities for science teachers to engage in research currently exist. However, there are barriers to teacher participation in research projects; for example, laboratory-based projects can be time consuming and require extensive training before a participant can meaningfully engage in scientific inquiry.
Field-based projects can be an effective avenue for involving teachers in research; at its best, earth science field work is a fun, highly immersive experience that meaningfully contributes to scientific research projects and can provide a payoff out of proportion to a relatively small time commitment. In particular, broadband seismology deployments provide an excellent opportunity to give teachers field-based research experience. Such deployments are labor-intensive and require large teams, with field tasks that vary from digging holes and pouring concrete to constructing and configuring electronics systems and leveling and orienting seismometers. A recently established pilot program, known as FEST (Field Experiences for Science Teachers), is experimenting with providing one week of summer field experience for high school earth science teachers in Connecticut. Here I report on results and challenges from the first year of the program, which is funded by the NSF-CAREER program and is being run in conjunction with a temporary deployment of 15 seismometers in Connecticut, known as SEISConn (Seismic Experiment for Imaging Structure beneath Connecticut). A small group of teachers participated in a week of field work in August 2015 to deploy seismometers in northern CT; this experience followed a visit of the 14. Real Time Data for Seismology at the IRIS Data Management Center, An NSF-Sponsored Facility Science.gov (United States) Benson, R. B.; Ahern, T. K.; Trabant, C.; Weertman, B. R.; Casey, R.; Stromme, S.; Karstens, R. 2012-12-01 15. AlpArray - technical strategies for large-scale European co-operation in broadband seismology Science.gov (United States) Brisbourne, A.; Clinton, J.; Hetenyi, G.; Pequegnat, C.; Wilde-Piorko, M.; Villasenor, A.; Comelli, P.; AlpArray Working Group 2012-04-01 AlpArray is a new initiative to study the greater Alpine area with a large-scale broadband seismological network.
The interested parties (currently 32 institutes in 12 countries) plan to combine their existing infrastructures into an all-out transnational effort that includes data acquisition, processing, imaging and interpretation. The experiment will encompass the greater Alpine area, from the Black Forest in the north to the Northern Apennines in the south and from the Pannonian Basin in the east to the French Massif Central in the west. We aim to cover this region with high-quality broadband seismometers by combining the ~400 existing permanent stations with an additional 400+ instruments from mobile pools. In this way, we plan to achieve homogeneous and high-resolution coverage while also deploying densely spaced stations along swaths across key parts of the Alpine chain. These efforts on land will be combined with deployments of ocean bottom seismometers in the Mediterranean Sea. Significant progress has already been made in outlining the scientific goals and funding strategy. A brief overview of these aspects of the initiative will be presented here. However, we will concentrate on the technical aspects: how efficient large-scale integration of existing infrastructures can be achieved. Existing permanent station coverage within the greater Alpine area has been collated and assessed for data availability, allowing strategies to be developed for network densification to ensure a robust backbone network. An anticipated deployment strategy has been drawn up to optimise array coverage and data quality. The augmented backbone network will be supplemented by more densely spaced temporary arrays targeting more specific scientific questions. For these temporary arrays, a strategy document has been produced to outline standards for station installation, data acquisition, processing, archival and dissemination. All these operations are of course vital. However, data 16.
Significant breakthroughs in monitoring networks of the volcanological and seismological French observatories Science.gov (United States) Lemarchand, A.; Francois, B.; Bouin, M.; Brenguier, F.; Clouard, V.; Di Muro, A.; Ferrazzini, V.; Shapiro, N.; Staudacher, T.; Kowalski, P.; Agrinier, P. 2013-12-01 17. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array Science.gov (United States) Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H. 2012-04-01 highlights the value of seismic monitoring, since it allows a detailed spatial and temporal survey of events that classical approaches are not able to observe. In the future, dense two-dimensional seismological arrays will assess the landscape dynamics of an entire catchment in real time, tracking sediments from slopes to rivers. 18. Lithospheric Expressions of the Precambrian Shield, Mesozoic Rifting, and Cenozoic Subduction and Mountain Building in Venezuela Science.gov (United States) Levander, A.; Masy, J.; Niu, F. 2013-05-01 The Caribbean (CAR)-South American (SA) plate boundary in Venezuela is a broad zone of faulting and diffuse deformation. GPS measurements show the CAR moving approximately 2 cm/yr relative to SA, parallel to the strike-slip fault system in the east, with more oblique convergence in the west (Weber et al., 2001) causing the southern edge of the Caribbean to subduct beneath northwestern South America. The west is further complicated by the motion of the triangular Maracaibo block, which is escaping northeastward relative to SA along the Bocono and Santa Marta Faults. In central and eastern Venezuela, plate motion is accommodated by transpression and transtension along the right-lateral San Sebastian-El Pilar strike-slip fault system. The strike-slip system marks the northern edge of coastal thrust belts and their associated foreland basins.
The Archean-Proterozoic Guayana Shield, part of the Amazonian Craton, underlies southeastern and south-central Venezuela. We used the 87-station Venezuela-U.S. BOLIVAR array (Levander et al., 2006) to investigate lithospheric structure in northern South America. We combined finite-frequency Rayleigh wave tomography with Ps and Sp receiver functions to determine lithosphere-asthenosphere boundary (LAB) depth. We measured Rayleigh phase velocities from 45 earthquakes in the period band 20-100 s. The phase velocities were inverted for 1D shear velocity structure on a 0.5 by 0.5 degree grid. Crustal thickness for the starting model was determined from active seismic experiments and receiver function analysis. The resulting 3D shear velocity model was then used to determine the depth of the LAB, and to CCP stack Ps and Sp receiver functions from ~45 earthquakes. The receiver functions were calculated in several frequency bands using iterative deconvolution and inverse filtering. Lithospheric thickness varies by more than a factor of 2.5 across Venezuela. We can divide the lithosphere into several distinct provinces, with LAB depth 19. How does continental lithosphere break apart? A 3D seismic view on the transition from magma-poor rifted margin to magmatic oceanic lithosphere Science.gov (United States) Emmanuel, M.; Lescanne, M.; Picazo, S.; Tomasi, S. 2017-12-01 In the last decade, high-quality seismic data and drilling results have drastically challenged our ideas about how continents break apart. New models address their observed variability and are presently redefining the basics of rifting as well as exploration potential along deepwater rifted margins. Seafloor spreading is even more constrained by decades of scientific exploration along Mid-Oceanic Ridges. By contrast, the transition between rifting and drifting remains a debated subject.
This lithospheric breakup "event" is geologically recorded along Ocean-Continent Transitions (OCT) at the most distal part of margins, before indubitable oceanic crust. Often lying along ultra-deepwater margin domains and buried beneath a thick sedimentary pile, high-quality images of these domains are rare but mandatory to gain strong insights into the processes responsible for lithospheric breakup and their consequences for the overlying basins. We intend to answer these questions by studying a world-class 3D seismic survey in a segment of a rifted margin exposed in the Atlantic. Through these data, we can show in detail the OCT architecture between a magma-poor hyper-extended margin (with exhumed mantle) and a classical layered oceanic crust. It is characterized by (1) the development of out-of-sequence detachment systems with a landward-dipping geometry and (2) increasing magmatic additions oceanwards (intrusives and extrusives). The geometry of these faults suggests that they may be decoupled at a mantle brittle-ductile interface, which may be an indicator of the thermal state. Furthermore, magmatism increases as deformation migrates toward the first indubitable oceanic crust, driving progressive magmatic crustal thickening below, above and across the tapering remnant of the margin. As the magmatic budget increases oceanwards, full-rate divergence is less and less accommodated by faulting. The magmatic-sedimentary architecture of the OCT therefore changes from supra-detachment to magmatic 20. A numerical model of mantle convection with deformable, mobile continental lithosphere within three-dimensional spherical geometry Science.gov (United States) Yoshida, M. 2010-12-01 A new numerical simulation model of mantle convection with a compositionally and rheologically heterogeneous, deformable, mobile continental lithosphere is presented for the first time using three-dimensional regional spherical-shell geometry (Yoshida, 2010, Earth Planet. Sci. Lett.).
The numerical results revealed that one of the major factors enabling supercontinent breakup and subsequent continental drift is a pre-existing, weak (low-viscosity) continental margin (WCM) in the supercontinent. Characteristic tectonic structures such as young orogenic belts and suture zones in a continent are expected to be mechanically weaker than the stable part of the continental lithosphere with the cratonic root (or cratonic lithosphere) and to yield lateral viscosity variations in the continental lithosphere. In the present-day Earth's lithosphere, the pre-existing, mechanically weak zones emerge as diffuse plate boundaries. However, the dynamic role of the WCM in the stability of continental lithosphere has not been well understood in geophysical terms. In my numerical model, a compositionally buoyant and highly viscous continental assemblage with pre-existing WCMs, analogous to the past supercontinent, is modeled and imposed on well-developed mantle convection whose vigor of convection, internal heating rate, and rheological parameters are appropriate for the Earth's mantle. The visco-plastic oceanic lithosphere and the associated subduction of oceanic plates are incorporated. The time integration of the advection of continental materials with zero chemical diffusion is performed by a tracer particle method. The time evolution of mantle convection after setting the model supercontinent is followed over 800 Myr. Earth-like continental drift is successfully reproduced, and the characteristic thermal interaction between the mantle and the continent/supercontinent is observed in my new numerical model. Results reveal that the WCM protects the cratonic lithosphere from being 1. Continental lithosphere of the Arabian Plate: A geologic, petrologic, and geophysical synthesis Science.gov (United States) Stern, Robert J.; Johnson, Peter 2010-07-01 The Arabian Plate originated ~25 Ma ago by rifting of NE Africa to form the Gulf of Aden and Red Sea.
It is one of the smaller and younger of the Earth's lithospheric plates. The upper part of its crust consists of crystalline Precambrian basement, Phanerozoic sedimentary cover as much as 10 km thick, and Cenozoic flood basalt (harrat). The distribution of these rocks and variations in elevation across the Plate cause a pronounced geologic and topographic asymmetry, with extensive basement exposures (the Arabian Shield) and elevations of as much as 3000 m in the west, and a Phanerozoic succession (Arabian Platform) that thickens, and a surface that descends to sea level, eastward between the Shield and the northeastern margin of the Plate. This tilt in the Plate is partly the result of marginal uplift during rifting in the south and west, and loading during collision with, and subduction beneath, the Eurasian Plate in the northeast. But a variety of evidence suggests that the asymmetry also reflects a fundamental crustal and mantle heterogeneity in the Plate that dates from Neoproterozoic time when the crust formed. The bulk of the Plate's upper crystalline crust is Neoproterozoic in age (1000-540 Ma) reflecting, in the west, a 300-million-year process of continental crustal growth between ~850 and 550 Ma represented by amalgamated juvenile magmatic arcs, post-amalgamation sedimentary and volcanic basins, and granitoid intrusions that make up as much as 50% of the Shield's surface. Locally, Archean and Paleoproterozoic rocks are structurally intercalated with the juvenile Neoproterozoic rocks in the southern and eastern parts of the Shield. The geologic dataset for the age, composition, and origin of the upper crust of the Plate in the east is smaller than the database for the Shield, and conclusions made about the crust in the east are correspondingly less definitive. In the absence of exposures, furthermore, nothing is known by direct observation about the 2. Proceedings.
First Assembly of the Latin-America and Caribbean Seismological Commission - LACSC Directory of Open Access Journals (Sweden) Third Latin-American Congress of Seismology 2014-02-01 The Latin-American and Caribbean region is an area with a very complex tectonic setting, where stress and strain generated by the interaction of several lithospheric plates is being absorbed. Several regional fault systems, with moderate and high activity, represent a hazard for a significant part of the population (more than 500 million inhabitants). Given the recent developments in the mining and energy industries, a great deal of exploration has been focusing on this part of the world, and the potential extraction of mineral resources is going to generate important changes in vast areas of the American continent. Considering the geodynamic framework and the expectation of the extraction of economic resources, questions about the impact of human activities and the possible destabilizing of the relevant tectonic systems are raised. Many theoretical and applied geophysical studies have been developed in different regions of Latin-America and the Caribbean, mainly since the second half of the 20th century. There have been basically two motivations to carry out these studies: the evaluation of natural hazards and the exploration of economic resources. Such studies have mainly focused on the knowledge of: (a) the structure of the crust and upper mantle, (b) the regional tectonic evolution, (c) the local and regional seismic hazards, and (d) the geometry of geologic structures of economic interest. This part of the world has witnessed an excessive and disproportionate growth in the number of urban centers, evidenced by the increase in economic and social gaps.
This situation puts a great portion of the population at a high level of vulnerability, which, in addition to the natural hazard in the region, configures a scenario of high seismic risk. In this academic event, the relevant results associated with the seismotectonic behavior of this part of the world will be addressed, as well as the implications of active exploration of the tectonic conditions 3. Preliminary results from receiver function analysis in a seismological network across the Pamir Science.gov (United States) Schneider, Felix M.; Yuan, Xiaohui; Sippl, Christan; Schurr, Bernd; Mechie, James; Minaev, Vlad; Oimahmadov, Ilhomjon; Gadoev, Mustafo; Abdybachaev, Ulan A. 2010-05-01 The multi-disciplinary TIen Shan-PAmir GEodynamic (TIPAGE) program aims to investigate the dynamics of the orogeny of the Tien Shan and Pamir mountains, which are situated in south Kyrgyzstan and east Tajikistan in Central Asia. Deformation and uplift accompanied by crustal thickening are mainly induced by the collision between the Indian and Eurasian continental plates. Locally, this collision provides the world's largest active intra-continental subduction zone. Within the framework of the TIPAGE program we operate a temporary seismic array consisting of 32 broadband and 8 short-period seismic stations for a period of two years (from 2008 to 2010), covering an area of 300 x 300 km over the main part of the central Pamir plateau and the Alai range of the southern Tien Shan. In the first year, 24 broadband stations were set up in a 350-km-long north-south profile geometry from Osh in southern Kyrgyzstan to Zorkul in south-eastern Tajikistan with approximately 15 km station spacing. We perform a receiver function (RF) analysis of converted P and S waves from teleseismic earthquakes at epicentral distances of 35-95 degrees with a minimum magnitude of 5.5.
To this end, we decompose their wavefields by rotating the coordinate systems of the recorded seismograms from an N,E,Z into an SH,SV,P system. RFs are isolated by deconvolution of the P-component from the SH- and SV-components. They provide a robust tool to locate discontinuities in wave velocity, like the Moho, and thus represent the method of choice to determine crustal thickness. First results show a crustal thickness of 70-80 km. Xenolith findings from depths of 100 km reported by Hacker et al. (2005) give indications of even higher values. The N-S profile geometry will produce a high-resolution RF image to map the gross crustal and lithospheric structure. In addition, a 2D network with an additional 16 stations will enable an investigation of lateral structure variation. We give an introduction to the project and 4. S-Wave Velocities of the Lithosphere-Asthenosphere System in the Caribbean Region International Nuclear Information System (INIS) Gonzalez, O'Leary; Alvarez, Jose Leonardo; Moreno, Bladimir; Panza, Giuliano F. 2010-06-01 An overview of the S-wave velocity (Vs) structural model of the Caribbean is presented with a resolution of 2° x 2°. As a result of the frequency-time analysis (FTAN) of more than 400 epicenter-station trajectories in this region, new tomographic maps of Rayleigh wave group velocity dispersion at periods ranging from 10 s to 40 s have been determined. For each 2° x 2° cell, group velocity dispersion curves were determined and extended to 150 s by adding data from a larger-scale tomographic study (Vdovin et al., 1999). Using, as independent a priori information, the available geological and geophysical data of the region, each dispersion curve has been mapped, by non-linear inversion, into a set of Vs vs. depth models in the depth range from 0 km to 300 km. Due to the non-uniqueness of the solutions for each cell, a Local Smoothness Optimization (LSO) has been applied to the whole region to identify a tridimensional model of Vs vs.
depth in cells of 2° x 2°, thus satisfying Occam's razor. Through these models some main features of the lithosphere and asthenosphere are evidenced, such as: the west-directed subduction zone of the eastern Caribbean region, with a clear mantle wedge between the Caribbean lithosphere and the subducted slab; the complex and asymmetric behavior of the crustal and lithospheric thickness in the Cayman ridge; the diffuse presence of oceanic crust in the region; and the presence of continental-type crust in the South America, Central America and North America plates, as well as the bottom of the upper asthenosphere, which gets shallower going from west to east. 5. Magnetotelluric investigations of the lithosphere beneath the central Rae craton, mainland Nunavut, Canada Science.gov (United States) Spratt, Jessica E.; Skulski, Thomas; Craven, James A.; Jones, Alan G.; Snyder, David B.; Kiyan, Duygu 2014-03-01 New magnetotelluric soundings at 64 locations throughout the central Rae craton on mainland Nunavut constrain 2-D resistivity models of the crust and lithospheric mantle beneath three regional transects. Responses determined from colocated broadband and long-period magnetotelluric recording instruments enabled resistivity imaging to depths of >300 km. Strike analysis and distortion decomposition on all data reveal a regional trend of 45-53°, but locally the geoelectric strike angle varies laterally and with depth. The 2-D models reveal a resistive upper crust to depths of 15-35 km that is underlain by a conductive layer that appears to be discontinuous at or near major mapped geological boundaries. Surface projections of the conductive layer coincide with areas of high-grade, Archean metasedimentary rocks. Tectonic burial of these rocks and thickening of the crust occurred during the Paleoproterozoic Arrowsmith (2.3 Ga) and Trans-Hudson orogenies (1.85 Ga).
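The resistivity values quoted in magnetotelluric studies such as this one are derived from measured surface impedances. As a minimal, generic sketch (not the authors' processing; the period and resistivity below are illustrative, chosen only to match the order of magnitude reported for cratonic mantle), the standard half-space relation rho_a = |Z|^2 / (omega * mu0) can be written as:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def halfspace_impedance(rho_ohm_m, freq_hz):
    """Surface impedance magnitude |Z| (ohm) of a uniform half-space."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(omega * MU0 * rho_ohm_m)

def apparent_resistivity(abs_z, freq_hz):
    """Apparent resistivity (ohm*m) from an impedance magnitude |Z|."""
    omega = 2 * math.pi * freq_hz
    return abs_z ** 2 / (omega * MU0)

# Round trip for a 3000 ohm*m half-space at a 100 s period:
z = halfspace_impedance(3000.0, 0.01)
print(round(apparent_resistivity(z, 0.01)))  # prints 3000
```

For a layered Earth the apparent resistivity varies with period, which is why long-period recordings are needed to image to 300 km depths.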
Overall, the uppermost mantle of the Rae craton shows resistivity values that range from 3000 Ω m in the northeast (beneath Baffin Island and the Melville Peninsula) to 10,000 Ω m beneath the central Rae craton, to >50,000 Ω m in the south near the Hearne Domain. Near-vertical zones of reduced resistivity are identified within the uppermost mantle lithosphere that may be related to areas affected by mantle melt or metasomatism associated with emplacement of Hudsonian granites. A regional decrease in resistivities to values of 500 Ω m at depths of 180-220 km, increasing to 300 km near the southern margin of the Rae craton, is interpreted as the lithosphere-asthenosphere boundary. 6. Heat flow, heat transfer and lithosphere rheology in geothermal areas: Features and examples Science.gov (United States) Ranalli, G.; Rybach, L. 2005-10-01 Surface heat flow measurements over active geothermal systems indicate strongly positive thermal anomalies. Whereas in "normal" geothermal settings the surface heat flow is usually below 100-120 mW m⁻², in active geothermal areas heat flow values as high as several watts per meter squared can be found. Systematic interpretation of heat flow patterns sheds light on heat transfer mechanisms at depth on different lateral, depth and time scales. Borehole temperature profiles in active geothermal areas show various signs of subsurface fluid movement, depending on position in the active system. The heat transfer regime is dominated by heat advection (mainly free convection). The onset of free convection depends on various factors, such as permeability, temperature gradient and fluid properties. The features of heat transfer are different for single- or two-phase flow. Characteristic heat flow and heat transfer features in active geothermal systems are demonstrated by examples from Iceland, Italy, New Zealand and the USA.
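The dependence of convective onset on permeability, temperature gradient and fluid properties described in this entry is conventionally captured by the porous-medium (Darcy) Rayleigh number, with convection starting above the Lapwood critical value of 4π². A generic sketch with illustrative parameter values (hot-water-like fluid properties, not numbers taken from this study):

```python
import math

RA_CRIT = 4 * math.pi ** 2  # Lapwood critical value for a uniform porous layer

def porous_rayleigh(k_m2, dT_K, H_m, alpha=1e-3, rho_f=1000.0, g=9.81,
                    mu=3e-4, kappa=1e-6):
    """Darcy-Rayleigh number: buoyancy-driven transport vs. thermal diffusion.

    k_m2: permeability; dT_K: temperature difference across the layer;
    H_m: layer thickness; remaining defaults are hot-water-like properties.
    """
    return rho_f * g * alpha * dT_K * k_m2 * H_m / (mu * kappa)

# A 1 km thick layer with 200 K across it: tight rock vs. fractured rock
for k in (1e-16, 1e-14):
    ra = porous_rayleigh(k, 200.0, 1000.0)
    print(f"k={k:.0e} m^2  Ra={ra:.1f}  convects: {ra > RA_CRIT}")
```

With these numbers only the fractured, high-permeability case exceeds the critical value, which is why free convection is characteristic of active geothermal systems rather than "normal" settings.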
Two main factors affect the rheology of the lithosphere in active geothermal areas: steep temperature gradients and high pore fluid pressures. Combined with lithology and structure, these factors result in a rheological zonation with important consequences both for geodynamic processes and for the exploitation of geothermal energy. As a consequence of anomalously high temperature, the mechanical lithosphere is thin and its total strength can be reduced by almost one order of magnitude with respect to the average strength of continental lithosphere of comparable age and thickness. The top of the brittle/ductile transition is located within the upper crust at depths less than 10 km, acts as the root zone of listric normal faults in extensional environments and, at least in some cases, is visible on seismic reflection lines. These structural and rheological features are well illustrated in the Larderello geothermal field in Tuscany. 7. Helium as a tracer for fluids released from Juan de Fuca lithosphere beneath the Cascadia forearc Science.gov (United States) McCrory, Patricia A.; Constantz, James E.; Hunt, Andrew G.; Blair, James Luke 2016-01-01 The ratio between helium isotopes (3He/4He) provides an excellent geochemical tracer for investigating the sources of fluids sampled at the Earth's surface. 3He/4He values observed in 25 mineral springs and wells above the Cascadia forearc document a significant component of mantle-derived helium above Juan de Fuca lithosphere, as well as variability in 3He enrichment across the forearc. Sample sites arcward of the forearc mantle corner (FMC) generally yield significantly higher ratios (1.2-4.0 RA) than those seaward of the corner (0.03-0.7 RA). 
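The RA unit quoted above is the sample 3He/4He ratio normalized by the atmospheric ratio. A minimal sketch of this normalization, assuming the standard atmospheric value of 1.384e-6 (the sample ratios below are illustrative, not the paper's measurements):

```python
R_ATM = 1.384e-6  # standard atmospheric 3He/4He ratio

def r_over_ra(he3, he4):
    """Express a measured 3He/4He ratio in units of the atmospheric ratio (RA)."""
    return (he3 / he4) / R_ATM

# Illustrative end-member samples:
arcward = r_over_ra(5.5e-6, 1.0)  # mantle-influenced, ~4 RA
seaward = r_over_ra(7.0e-8, 1.0)  # radiogenic-4He dominated, ~0.05 RA
print(f"{arcward:.1f} RA, {seaward:.2f} RA")
```

Values well above 1 RA indicate a mantle-derived 3He component, while values below 1 RA reflect dilution by radiogenic 4He produced in the crust.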
The highest ratios in the Cascadia forearc coincide with slab depths (40-45 km) where metamorphic dehydration of young oceanic lithosphere is expected to release significant fluid and where tectonic tremor occurs, whereas little fluid is expected to be released from the slab depths (25-30 km) beneath sites seaward of the corner. Tremor (considered a marker for high fluid pressure) and high RA values in the forearc are spatially correlated. The Cascadia tremor band is centered on its FMC, and we tentatively postulate that hydrated forearc mantle beneath Cascadia deflects a significant portion of slab-derived fluids updip along the subduction interface, to vent in the vicinity of its corner. Furthermore, high RA values within the tremor band just arcward of the FMC suggest that the innermost mantle wedge is relatively permeable. Conceptual models require: (1) a deep fluid source as a medium to transport primordial 3He; (2) conduits through the lithosphere which serve to speed fluid ascent to the surface before significant dilution from radiogenic 4He can occur; and (3) near-lithostatic fluid pressure to keep conduits open. Our spatial correlation between high RA values and tectonic tremor provides independent evidence that tremor is associated with deep fluids, and it further suggests that high pore pressures associated with tremor may serve to keep fractures open for 3He migration through the ductile upper mantle and lower crust. 8. Lithospheric electrical structure of the middle Lhasa terrane in the south Tibetan plateau Science.gov (United States) Liang, Hongda; Jin, Sheng; Wei, Wenbo; Gao, Rui; Ye, Gaofeng; Zhang, Letian; Yin, Yaotian; Lu, Zhanwu 2018-04-01 The Lhasa terrane in the southern Tibetan plateau is a huge tectono-magmatic belt and an important metallogenic belt. Its formation, evolution and mineralization were affected by the subduction of an oceanic plate and the subsequent continental collision.
However, the evolution of the Lhasa terrane has long been a subject of much debate. The lithospheric structure records the deep processes of the subduction of the oceanic plate and the continental collision. The magnetotelluric (MT) method can probe sub-surface electrical conductivity; dense new broadband and long-period magnetotelluric data were therefore collected along a south-north trending profile that crosses the Lhasa terrane at 88°-89°E. Dimensionality analyses demonstrated that the MT data can be interpreted using two-dimensional approaches, and the regional strike direction was determined as N110°E. Based on the data analysis results, a two-dimensional (2-D) resistivity model of the crust and upper mantle was derived from inversion of the transverse electric mode, transverse magnetic mode and vertical magnetic field data. The inversion model shows a large north-dipping resistor that extends from the upper crust to the upper mantle beneath the Himalaya and the south of the Lhasa Terrane, which may represent the subducting Indian continental lithosphere. Latitude 31°N may be an important boundary in the Lhasa Terrane: the south exhibits a prominent high-conductivity anomaly from the lower crust to the upper mantle, indicating the existence of asthenosphere upwelling, while the north exhibits higher resistivity and may have a reworked ancient basement. The formation of the ore deposits in the study area may be related to the upwelling of mantle material triggered by slab tearing and/or break-off of the Indian lithosphere, and the mantle material input also contributed to the total thickness of the present-day Tibetan crust. The results provide helpful constraints for understanding the mechanism of the continent-continent collision and 9. Deformation and hydration state of the lithospheric mantle beneath the Styrian Basin (Pannonian Basin, Eastern Austria) Science.gov (United States) Aradi, L. E.; Hidas, K.; Kovács, I. J.; Klébesz, R.; Szabo, C.
2016-12-01 In the Carpathian-Pannonian Region, Neogene alkali basaltic volcanism occurred in six volcanic fields, of which the Styrian Basin Volcanic Field (SBVF) is the westernmost. In this study, we present new petrographic and crystal preferred orientation (CPO) data, and structural hydroxyl ("water") contents of upper mantle xenoliths from 12 volcanic outcrops across the SBVF. The studied xenoliths are mostly coarse granular lherzolites; amphiboles are present in almost every sample and often replace pyroxenes and spinels. The peridotites are highly annealed; olivines and pyroxenes do not show a significant amount of intragranular deformation. Despite the annealed texture of the peridotites, the olivine CPO is unambiguous, and varies between [010]-fiber, orthogonal and [100]-fiber symmetry. The CPO of pyroxenes is coherent with coeval deformation with olivine, showing [100]OL distributed subparallel to [001]OPX. The CPO of amphiboles suggests postkinematic epitaxial overgrowth on the precursor pyroxenes. The "water" content of the studied xenoliths exhibits rather high values, up to 10, 290 and 675 ppm in olivine, ortho- and clinopyroxene, respectively. Ortho- and clinopyroxene pairs show equilibrium in all samples; however, "water" loss in olivines is observed in several xenoliths. The xenoliths show equilibrium temperatures from 850 to 1100 °C, which corresponds to lithospheric mantle depths between 30 and 60 km. Equilibrium temperatures correlate with the varying CPO symmetries and grain size: coarser-grained xenoliths with [100]-fiber and orthorhombic symmetry appear among the high-temperature (>1000 °C) xenoliths, which is characteristic of an asthenospheric origin. Most of the samples display transitional CPO symmetry between [010]-fiber and orthogonal, which indicates extensive lithospheric deformation under a varying stress field, from transtensional to transpressional settings.
Based on the estimated seismic properties of the studied samples, a significant part of 10. 3-D lithospheric structure and regional/residual Bouguer anomalies in the Arabia-Eurasia collision (Iran) Science.gov (United States) Jiménez-Munt, I.; Fernández, M.; Saura, E.; Vergés, J.; Garcia-Castellanos, D. 2012-09-01 The aim of this work is to propose a first-order estimate of the crustal and lithospheric mantle geometry of the Arabia-Eurasia collision zone and to separate the measured Bouguer anomaly into its regional and local components. The crustal and lithospheric mantle structure is calculated from geoid height and elevation data combined with thermal analysis. Our results show that Moho depth varies from ~42 km at the Mesopotamian-Persian Gulf foreland basin to ~60 km below the High Zagros. The lithosphere is thicker beneath the foreland basin (~200 km) and thinner underneath the High Zagros and Central Iran (~140 km). Most of this lithospheric mantle thinning is accommodated under the Zagros mountain belt, coinciding with the suture between two different mantle domains on the Sanandaj-Sirjan Zone. The regional gravity field is obtained by calculating the gravimetric response of the 3-D crustal and lithospheric mantle structure obtained by combining elevation and geoid data. The calculated regional Bouguer anomaly differs noticeably from those obtained by filtering or just isostatic methods. The residual gravity anomaly, obtained by subtraction of the regional components from the measured field, is analyzed in terms of the dominating upper crustal structures. Deep basins and areas with salt deposits are characterized by negative values (~-20 mGal), whereas the positive values are related to igneous and ophiolite complexes and shallow basement depths (~20 mGal). 11. 3D Numerical Examination of Continental Mantle Lithosphere Response to Lower Crust Eclogitization and Nearby Slab Subduction Science.gov (United States) Janbakhsh, P.; Pysklywec, R.
2017-12-01 2D numerical modeling techniques have made great contributions to understanding the geodynamic processes involved in crustal- and lithospheric-scale deformations over the past 20 years. The aim of this presentation is to expand the scope covered by previous researchers to 3 dimensions to address out-of-plane intrusion and extrusion of mantle material in and out of the model space, and toroidal mantle wedge flows. In addition, 3D velocity boundary conditions can create more realistic models to replicate real case scenarios. The 3D numerical experiments that will be presented are designed to investigate the density and viscosity effects of lower crustal eclogitization on the decoupling of continental mantle lithosphere from the crust and its delamination. In addition, these models examine near-field effects of a subducting ocean lithosphere and a lithospheric-scale fault zone on the evolution of the processes. The model solutions and predictions will also be compared against the Anatolian geology, where subduction of the Aegean and Arabian slabs, and the northern boundary with the North Anatolian Fault Zone, are considered as two main contributing factors to anomalous crustal uplift, missing mantle lithosphere, and anomalous surface heat flux. 12. Lateral displacement of crustal units relative to underlying mantle lithosphere: Example from the Bohemian Massif Czech Academy of Sciences Publication Activity Database 2017-01-01 Roč. 48, December (2017), s. 125-138 ISSN 1342-937X R&D Projects: GA ČR GAP210/12/2381; GA MŠk(CZ) LD15029; GA MŠk LM2010008; GA MŠk(CZ) LM2015079 Institutional support: RVO:67985530 Keywords: Bohemian Massif * Teplá-Barrandian mantle lithosphere * Zone Erbendorf-Vohenstrauss * Jáchymov Fault Zone Subject RIV: DC - Seismology, Volcanology, Earth Structure OBOR OECD: Volcanology Impact factor: 6.959, year: 2016 13.
Nature of the basement of the East Anatolian plateau: Implications for the lithospheric foundering processes Science.gov (United States) Topuz, G.; Candan, O.; Zack, T.; Yılmaz, A. 2017-12-01 The East Anatolian Plateau (Turkey) is characterized by (1) an extensive volcanic-sedimentary cover of Neogene to Quaternary age, (2) crustal thicknesses of 42-50 km, and (3) an extremely thinned lithospheric mantle. Its basement beneath the young cover is thought to consist of oceanic accretionary complexes of Late Cretaceous to Oligocene age. The attenuated state of the lithospheric mantle and the causes of the young volcanism are accounted for by slab steepening and subsequent break-off. We present field geological, petrological and geochronological data on three basement inliers (Taşlıçay, Akdağ and Ilıca) in the region. These areas are made up of amphibolite- to granulite-facies rocks, comprising marble, amphibolite, metapelite, quartzite and metagranite. The granulite-facies domain was equilibrated at 0.7 GPa and 800 °C at 83 ± 2 Ma (2σ). The metamorphic rocks are intruded by subduction-related coeval gabbroic, quartz monzonitic to tonalitic rocks. Both the metamorphic rocks and the intrusions are tectonically overlain by ophiolitic rocks. All these crystalline rocks are unconformably overlain by lower Maastrichtian clastic rocks and reefal limestone, suggesting that exhumation at the Earth's surface and juxtaposition with the ophiolitic rocks occurred by the early Maastrichtian. U-Pb dating on igneous zircon from the metagranite yielded a protolith age of 445 ± 10 Ma (2σ). The detrital zircons from a metaquartzite point to a Neoproterozoic to Early Paleozoic provenance. All these data favor a more or less continuous continental substrate to the allochthonous ophiolitic rocks beneath the young volcanic-sedimentary cover.
The metamorphism and coeval magmatism can be regarded as the middle- to lower-crustal root of the Late Cretaceous magmatic arc that developed due to northward subduction along the Bitlis-Zagros suture. The presence of a continental basement beneath the young cover requires that the loss of the lithospheric mantle from beneath the East 14. Late Miocene Pacific plate kinematic change explained with coupled global models of mantle and lithosphere dynamics DEFF Research Database (Denmark) Stotz, Ingo Leonardo; Iaffaldano, Giampiero; Davies, DR 2017-01-01 and the consequent subduction polarity reversal. The uncertainties associated with the timing of this event, however, make it difficult to quantitatively demonstrate a dynamical association. Here, we first reconstruct the Pacific plate's absolute motion since the mid-Miocene (15 Ma), at high-temporal resolution […] /lithosphere system to test hypotheses on the dynamics driving this change. These indicate that the arrival of the OJP at the Melanesian arc, between 10 and 5 Ma, followed by a subduction polarity reversal that marked the initiation of subduction of the Australian plate underneath the Pacific realm, were the key […] drivers of this kinematic change… 15. Spatial patterns in the distribution of kimberlites: relationship to tectonic processes and lithosphere structure DEFF Research Database (Denmark) Chemia, Zurab; Artemieva, Irina; Thybo, Hans 2015-01-01 Since the discovery of diamonds in kimberlite-type rocks more than a century ago, a number of theories regarding the processes involved in kimberlite emplacement have been put forward to explain the unique properties of kimberlite magmatism. Geological data suggests that pre-existing lithosphere […] of establishing characteristic scales for the stage 1 and stage 2 processes.
To reveal similarities within the kimberlite data we use a density-based clustering technique, density-based spatial clustering of applications with noise (DBSCAN), which is efficient for large data sets and requires one input… 16. The role of mechanical heterogeneities during continental breakup: a 3D lithospheric-scale modelling approach Science.gov (United States) Duclaux, Guillaume; Huismans, Ritske S.; May, Dave 2015-04-01 How and why do continents break? More than two decades of analogue and 2D plane-strain numerical experiments have shown that, whatever the origin of the forces driving extension, the geometry of continental rifts falls into three categories - or modes: narrow rift, wide rift, or core complex. The mode of extension itself is strongly influenced by the rheology (and rheological behaviour) of the modelled layered system. In every model, an initial thermal or mechanical heterogeneity, such as a weak seed or a notch, is imposed to help localise the deformation and avoid uniform stretching of the lithosphere by pure shear. While it is widely accepted that structural inheritance is a key parameter for controlling rift localisation - as implied by the Wilson Cycle - modelling the effect of lithospheric heterogeneities on the long-term tectonic evolution of an extending plate in full 3D remains challenging. Recent progress in finite-element methods applied to computational tectonics, along with the improved accessibility of high-performance computers, now enables a switch from plane-strain thermo-mechanical experiments to full 3D high-resolution experiments. Here we investigate the role of mechanical heterogeneities in rift opening, linkage and propagation during extension of a layered lithospheric system with pTatin3d, a geodynamics modelling package utilising the material-point method for tracking material composition, combined with a multigrid finite-element method to solve heterogeneous, incompressible visco-plastic Stokes problems.
The initial model setup consists of a box 1200 km wide by 250 km deep. It includes a 35 km layer of continental crust, underlain by 85 km of sub-continental lithospheric mantle, and an asthenospheric mantle. Crust and mantle have visco-plastic rheologies with pressure-dependent yielding, which includes strain weakening, and a temperature-, stress- and strain-rate-dependent viscosity based on wet quartzite rheology for the crust, and wet 17. Bridging the gap between the deep Earth and lithospheric gravity field Science.gov (United States) Root, B. C.; Ebbing, J.; Martinec, Z.; van der Wal, W. 2017-12-01 Global gravity field data obtained by dedicated satellite missions can be used to study the density distribution of the lithosphere. The gravitational signal from the deep Earth is usually removed by high-pass filtering of the data. However, this will also remove any long-wavelength signal of the lithosphere. Furthermore, it is still unclear what value for the truncation limit is best suited. An alternative is to forward model the deep-situated mass anomalies and subtract their gravitational signal from the observed data. This requires knowledge of the mantle mass anomalies, dynamic topography, and CMB topography. Global tomography provides the VS distribution in the mantle, which is related to the density distribution in the mantle. There are difficulties in constructing a density model from these data. Tomography relies on regularisation, which smooths the mantle anomalies. Also, the VS anomalies need to be converted to density anomalies with uncertain conversion factors. We study the observed reduction in magnitude of the density anomalies due to the regularisation of the global tomography models. The reduced magnitude of the anomalies cannot be recovered by increasing the conversion factor of the VS-to-density transformation. The reduction of the tomographic results seems to resemble the effect of a spatial Gaussian filter.
By determining the spectral difference between tomographic and gravimetric models, a reverse filter can be constructed to reproduce correct density variations in the complete mantle. The long wavelengths of the global tomography models are less affected by the regularisation and can fix the value of the conversion factor. However, the low-degree gravity signals are also dominated by the D" region. Therefore, different approaches are used to determine the effect of this region on the gravity field. The density anomalies in the mantle, as well as the effect of CMB undulations, are forward modelled into their gravitational potential field, such that 18. Structure of the lithosphere-asthenosphere and volcanism in the Tyrrhenian Sea and surroundings International Nuclear Information System (INIS) Panza, G.F.; Aoudia, A.; Pontevivo, A.; Sarao, A.; Peccerillo, A. 2003-01-01 The Italian peninsula and the Tyrrhenian Sea are among the most geologically complex regions on Earth. Such complexity is expressed by large lateral and vertical variations of the physical properties, as inferred from the lithosphere-asthenosphere structure, and by the wide variety of Plio-Quaternary magmatic rocks ranging from tholeiitic to calc-alkaline to sodium- and potassium-alkaline and ultra-alkaline compositions. The integration of geophysical, petrological and geochemical data allows us to recognise various sectors in the Tyrrhenian Sea and surrounding areas and to compare different volcanic complexes in order to better constrain the regional geodynamics. A thin crust overlying a soft mantle (10% of partial melting) is typical of the back-arc volcanism of the central Tyrrhenian Sea (Magnaghi, Vavilov and Marsili), where tholeiitic rocks dominate.
Similar lithosphere-asthenosphere structure is observed for the Ustica, Vulture and Etna volcanoes, where the geochemical signatures could be related to the contamination of the side intraplate mantle by material coming from either ancient or active roll-back. The lithosphere-asthenosphere structure and geochemical-isotopic composition do not change significantly when we move to the Stromboli-Campanian volcanoes, where we identify a well-developed low-velocity layer, about 10 km thick, below a thin lid, overlain by a thin continental crust. The geochemical signature of the nearby Ischia volcano is characteristic of the Campanian sector, and the relative lithosphere-asthenosphere structure may represent a transition to the back-arc volcanism sector acting in the central Tyrrhenian. The difference in structure beneath Stromboli and the nearby Vulcano and Lipari is confirmed by different geochemical signatures. The affinity between Vulcano, Lipari and Etna could be explained by their common position along the Tindari-Letojanni-Malta fault zone. A low-velocity mantle wedge, just below the Moho, is present 19. Origin and Distribution of Water Contents in Continental and Oceanic Lithospheric Mantle Science.gov (United States) Peslier, Anne H. 2013-01-01 The water content distribution of the upper mantle will be reviewed as based on the peridotite record. The amount of water in cratonic xenoliths appears controlled by metasomatism, while that of the oceanic mantle retains in part the signature of melting events. In both cases, the water distribution is heterogeneous both with depth and laterally, depending on localized water re-enrichments next to melt/fluid channels. The consequence of the water distribution for the rheology of the upper mantle and the location of the lithosphere-asthenosphere boundary will also be discussed. 20.
Lithospheric Structure of Northeastern Tibet Plateau from P and S Receiver Functions Science.gov (United States) Zhang, C.; Guo, Z.; Chen, Y. J. 2017-12-01 We obtain the lithospheric structure of Northeast Tibet (NE Tibet) along an N-S trending profile using P- and S-wave receiver functions recorded by the ChinArray-Himalaya II project. Both P- and S-receiver function migration images show highly consistent lithospheric features. The Moho depth is estimated to be 50 km beneath the Songpan-Ganzi (SPGZ) and Qaidam-Kunlun-West Qinling (QD) blocks, with little or no fluctuation. However, at the northern boundary of the QD, the Moho abruptly shallows to 40 km depth within a distance of 50 km. Meanwhile, at the southern end of the QD, the Moho is found at a depth of 60 km, which forms a double Moho conversion beneath the western Qinling fault (WQF). At the Qilian block, the first-order feature of the PRF image is the northward crustal thinning from 60 km to 45 km. The strong Moho fluctuations beneath the Qilian block reflect the ongoing mountain-building processes. Further to the north, the Moho deepens to 55 km and then gradually shallows to 40 km at the Alxa block. We observe significant Moho variations at the Central Asian Orogenic Belt (CAOB). Furthermore, Moho jumps and offsets are shown beneath major thrust and strike-slip fault zones, such as the >5 km Moho uplift across the North Qilian Fault (NQF), implying that these faults cut through the crust and partly accommodate the continuous deformation/crustal shortening that is propagated from the India-Eurasia collision. Strong negative signals found in both P and S receiver functions at around 100-150 km depth can be interpreted as the lithosphere-asthenosphere boundary (LAB). The LAB deepens from 100 km at the northern end to a maximum of 150 km at the southern end of the CAOB.
A relatively flat LAB with a depth of 150 km is shown beneath the Alxa block, and then it gradually thins to 100 km from the QD to the SPGZ. Beneath the SPGZ, our results indicate a thin and flat lithosphere (~100 km). 1. Lithospheric Structure of Antarctica and Implications for Geological and Cryospheric Evolution Science.gov (United States) Wiens, Douglas; Heeszel, David; Sun, Xinlei; Lloyd, Andrew; Nyblade, Andrew; Anandakrishnan, Sridhar; Aster, Richard; Chaput, Julien; Huerta, Audrey; Hansen, Samantha; Wilson, Terry 2013-04-01 Recent broadband seismic deployments, including the AGAP/GAMSEIS array of 24 broadband seismographs over the Gamburtsev Subglacial Mountains (GSM) in East Antarctica and the POLENET/ANET deployment of 33 seismographs across much of West Antarctica, reveal the detailed crust and upper mantle structure of Antarctica for the first time. The seismographs operate year-round even in the coldest parts of Antarctica, due to novel insulated boxes, power systems, and modified instrumentation developed in collaboration with the IRIS PASSCAL Instrument Center. We analyze the data using several different techniques to develop high-resolution models of Antarctic seismic structure. We use Rayleigh wave phase velocities at periods of 20-180 s, determined using a modified two-plane wave decomposition of teleseismic Rayleigh waves, to invert for the three-dimensional shear velocity structure. In addition, Rayleigh wave group and phase velocities obtained by ambient seismic noise correlation methods provide constraints at shorter periods and shallower depths. Receiver functions provide precise estimates of crustal structure beneath the stations, and P and S wave tomography provides models of upper mantle structure down to ~500 km depth along transects of greater seismic station density.
The new seismic results show that the high elevations of the GSM are supported by thick crust (~55 km), and are underlain by thick Precambrian continental lithosphere that initially formed during Archean to mid-Proterozoic times. The absence of lithospheric thermal anomalies suggests that the mountains were formed by a compressional orogeny during the Paleozoic, thus providing a locus for ice sheet nucleation throughout a long period of geological time. Within West Antarctica, the crust and lithosphere are extremely thin near the Transantarctic Mountain Front and topographic lows such as the Bentley Trench and Byrd Basin, which represent currently inactive Cenozoic rift systems. Slow seismic 2. Diploma in Seismology for High-School Teachers in Mexico Through an Open-Source Learning Platform Science.gov (United States) Perez-Campos, X.; Bello, D.; Dominguez, J.; Pérez, J.; Cruz, J. L.; Navarro Estrada, F.; Mendoza Carvajal, A. D. J. 2017-12-01 The high school physics programs in Mexico do not consider the immediate application of the concepts learned by the students. According to some pedagogical theories, much of the acquired knowledge is assimilated when experimenting, expressing, interacting and developing projects. It is in high school that young people explore and look for experiences to decide the area in which they want to focus their studies. The areas of science and engineering are chosen, motivated mainly by technology and outer space. There is little interest in Earth science, as reflected by the number of students in those areas. This may be due mainly to the lack of exposure and examples at the high school level. With this in mind, we are working on a project that seeks, through the preparation of teachers at this level, to bring their students to seismology and awaken in them their curiosity about issues related to it.
Based on the above, and taking as examples the successful programs "Seismographs in Schools" from IRIS and "Geoscience Information For Teachers" from EGU, the Mexican National Seismological Service has launched a project that contemplates three stages. The first consists of the design and delivery of a diploma course addressed to high school teachers. The second contemplates the installation of short-period seismographs in each of the participating schools' facilities. Finally, the third involves the active participation of teachers and their students in research projects based on the data collected by the instruments installed in their schools. This work presents the first phase. The diploma has been designed to offer teachers, in 170 hours, an introduction to topics related to seismology and to provide them with tools and examples that they can share with their students in the classroom. It is offered both online, through Moodle, an open-source learning platform, and in 12 classroom sessions. The first class started in June 2017 and will finish in November 2017. We 3. Severity Classification of a Seismic Event based on the Magnitude-Distance Ratio Using Only One Seismological Station Directory of Open Access Journals (Sweden) Luis Hernán Ochoa Gutiérrez 2014-07-01 Seismic event characterization is often accomplished using algorithms based only on information received at the seismological stations located closest to the particular event, while ignoring historical data received at those stations. These historical data are stored and unseen at this stage. This characterization process can delay the emergency response, costing valuable time in the mitigation of adverse effects on the affected population. Seismological stations have recorded data during many events that have been characterized by classical methods, and these data can be used as prior "knowledge" to train such stations to recognize patterns.
This knowledge can be used to make faster characterizations using only one three-component broadband station by applying bio-inspired algorithms or recently developed stochastic methods, such as kernel methods. We trained a Support Vector Machine (SVM) algorithm with seismograph data recorded by INGEOMINAS's National Seismological Network at a three-component station located near Bogota, Colombia. As input model descriptors, we used the following: (1) the integral of the Fourier transform/power spectrum for each component, divided into 7 windows of 2 seconds, beginning at the P onset time, and (2) the ratio between the calculated logarithm of magnitude (Mb) and epicentral distance. We used 986 events with magnitudes greater than 3 recorded from late 2003 to 2008. The algorithm classifies events with magnitude-distance ratios (a measure of the severity of possible damage caused by an earthquake) greater than a background value. This value can be used to estimate the magnitude based on a known epicentral distance, which is calculated from the difference between the P and S onset times. This rapid (< 20 seconds) magnitude estimate can be used for rapid response strategies. The results obtained in this work confirm that many hypocentral parameters and a rapid location of a seismic event can be obtained using a few 4. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere Science.gov (United States) Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun 2012-01-01 A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax).
The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root. 6. Magnetotelluric Imaging of Lower Crustal Melt and Lithospheric Hydration in the Rocky Mountain Front Transition Zone, Colorado, USA Science.gov (United States) Feucht, D. W.; Sheehan, A. F.; Bedrosian, P. A. 2017-12-01 We present an electrical resistivity model of the crust and upper mantle from two-dimensional (2-D) anisotropic inversion of magnetotelluric data collected along a 450 km transect of the Rio Grande rift, southern Rocky Mountains, and High Plains in Colorado, USA. Our model provides a window into the modern-day lithosphere beneath the Rocky Mountain Front to depths in excess of 150 km.
Two key features of the 2-D resistivity model are (1) a broad zone (~200 km wide) of enhanced electrical conductivity […] minerals, with maximum hydration occurring beneath the Rocky Mountain Front. This lithospheric "hydration front" has implications for the tectonic evolution of the continental interior and the mechanisms by which water infiltrates the lithosphere. 7. Implications for anomalous mantle pressure and dynamic topography from lithospheric stress patterns in the North Atlantic Realm Science.gov (United States) Schiffer, Christian; Nielsen, Søren Bom 2016-08-01 With convergent plate boundaries at some distance, the sources of the lithospheric stress field of the North Atlantic Realm are mainly mantle tractions at the base of the lithosphere, lithospheric density structure and topography. Given this, we estimate horizontal deviatoric stresses using a well-established thin-sheet model in a global finite element representation. We adjust the lithospheric thickness and the sub-lithospheric pressure iteratively, comparing modelled in-plane stress with the observations of the World Stress Map. We find that an anomalous mantle pressure associated with the Iceland and Azores melt anomalies, as well as topography, is able to explain the general pattern of the principal horizontal stress directions. The Iceland melt anomaly overprints the classic ridge push perpendicular to the Mid-Atlantic Ridge and affects the conjugate passive margins in East Greenland more than in western Scandinavia. The dynamic support of topography shows a distinct maximum of c. 1000 m in Iceland and amounts to <150 m along the coast of south-western Norway and 250-350 m along the coast of East Greenland.
Considering that large areas of the North Atlantic Realm have been estimated to be sub-aerial during the time of break-up, two components of dynamic topography seem to have affected the area: a short-lived component, which affected a wider area along the rift system and quickly dissipated after break-up, and a more durable one in the close vicinity of Iceland. This is consistent with the appearance of a buoyancy anomaly at the base of the North Atlantic lithosphere at or slightly before continental breakup, relatively fast dissipation of its fringes, and continued melt generation below Iceland. 8. Convective removal of the Tibetan Plateau mantle lithosphere by 26 Ma Science.gov (United States) Lu, Haijian; Tian, Xiaobo; Yun, Kun; Li, Haibing 2018-04-01 During the late Oligocene-early Miocene there were several major geological events in and around the Tibetan Plateau (TP). First, crustal shortening deformation ceased completely within the TP before 25 Ma, and instead adakitic rocks and potassic-ultrapotassic volcanics were emplaced in the Lhasa terrane from 26-25 Ma onward. Several recent paleoelevation reconstructions suggest an Oligocene-early Miocene uplift of 1500-3000 m for the Qiangtang (QT) and Songpan-Ganzi (SG) terranes, although the exact timing is unclear. As a possible response to this uplift, significant desertification occurred in the vicinity of the TP at 26-22 Ma, and convergence between India and Eurasia slowed considerably at 26-20 Ma. Subsequently, E-W extension was initiated no later than 18 Ma in the Lhasa and QT terranes. In contrast, the tectonic deformation around the TP was dominated by radial expansion of shortening deformation since 25-22 Ma. The plateau-wide near-synchroneity of these events calls for an internally consistent model, which can be best described as convective removal of the lower mantle lithosphere. Geophysical and petrochemical evidence further confirms that this extensive removal occurred beneath the QT and SG terranes.
The present review concludes that,
Physics Homework...College Intro? Two blocks, each of mass m = 2.5 kg, are pushed along the horizontal surface of a table by a horizontal force P of magnitude 6.9 N, directed to the right, as shown in the figure below. The blocks move together to the right at constant velocity. (a) Find the frictional force exerted on the lower block by the table. (b) Find the coefficient of kinetic friction between the surface of the block and the table. (c) Find the frictional force acting on the upper block. (a) Since the blocks move at constant velocity, the forces are balanced, so the friction force equals the applied force: f = P = 6.9 N. (b) Kinetic friction is given by f = μn. Treating the two blocks as one, the normal force is n = 2mg, so μ = f / n = f / (2mg) = 6.9 N / [2(2.5 kg)(9.8 m/s²)] = 0.14. (c) The frictional force acting on the upper block is static friction and must also equal the force P, or the block would slide. The question doesn't ask for it, but the static friction coefficient must then satisfy μs ≥ f / (mg) = 6.9 N / [(2.5 kg)(9.8 m/s²)] = 0.28. Hope this helps. #1
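The arithmetic in the answer above can be checked with a short script; all values are taken from the problem statement:

```python
# Friction check for the two-block problem above.
g = 9.8   # m/s^2, gravitational acceleration
m = 2.5   # kg, mass of each block
P = 6.9   # N, applied horizontal force

# (a) Constant velocity: the table's friction balances the applied force.
f_table = P

# (b) Kinetic friction f = mu * n, where n = 2mg is the normal force
# from both blocks pressing on the table.
mu_kinetic = f_table / (2 * m * g)

# (c) Static friction on the upper block also balances P, so the static
# coefficient must be at least P / (m * g).
mu_static_min = P / (m * g)

print(round(mu_kinetic, 2))     # 0.14
print(round(mu_static_min, 2))  # 0.28
```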
#### Sample records for source category barking 1. 40 CFR 98.410 - Definition of the source category. Science.gov (United States) 2010-07-01 ... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.410 Definition of the source category. (a) The industrial gas supplier source category consists of any facility that... 2. 40 CFR 98.110 - Definition of the source category. Science.gov (United States) 2010-07-01 ... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ferroalloy Production § 98.110 Definition of the source category. The ferroalloy production source category consists of any facility that uses pyrometallurgical techniques to produce any of the following metals: ferrochromium, ferromanganese, ferromolybdenum... 3. Intra-urban biomonitoring: Source apportionment using tree barks to identify air pollution sources. Science.gov (United States) Moreira, Tiana Carla Lopes; de Oliveira, Regiani Carvalho; Amato, Luís Fernando Lourenço; Kang, Choong-Min; Saldiva, Paulo Hilário Nascimento; Saiki, Mitiko 2016-05-01 It is of great interest to evaluate whether there is a relationship between possible sources and trace elements using biomonitoring techniques. In this study, bark samples of 171 trees were collected using a biomonitoring technique in the inner city of São Paulo. The trace elements (Al, Ba, Ca, Cl, Cu, Fe, K, Mg, Mn, Na, P, Rb, S, Sr and Zn) were determined by energy dispersive X-ray fluorescence (EDXRF) spectrometry. Principal Component Analysis (PCA) was applied to identify the plausible sources associated with the tree bark measurements. The greatest source was vehicle-induced non-tailpipe emissions, derived mainly from brake and tire wear and road dust resuspension (characterized by Al, Ba, Cu, Fe, Mn and Zn), which explained 27.1% of the variance, followed by cement (14.8%), sea salt (11.6%), biomass burning (10%), and fossil fuel combustion (9.8%).
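The variance fractions reported in the tree-bark PCA study above can be tallied with a short sketch; the percentages are those quoted in the abstract, and the source labels are paraphrased:

```python
# Variance explained by the principal components identified in the
# tree-bark source-apportionment study (percentages from the abstract).
explained = {
    "vehicle non-tailpipe emissions": 27.1,
    "cement": 14.8,
    "sea salt": 11.6,
    "biomass burning": 10.0,
    "fossil fuel combustion": 9.8,
}

cumulative = 0.0
for source, pct in explained.items():
    cumulative += pct
    print(f"{source}: {pct:.1f}% (cumulative {cumulative:.1f}%)")
# Together the five named sources account for 73.3% of the variance;
# the remainder is spread over unnamed components.
```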
We also verified that the elements related to vehicular emission showed different concentrations at different sites of the same street, which might be helpful for a new street classification according to the emission source. Spatial distribution maps of element concentrations were obtained to evaluate the different levels of pollution in streets and avenues. Results indicated that biomonitoring techniques using tree bark can be applied to evaluate the dispersion of air pollution and provide reliable data for further epidemiological studies. Copyright © 2016 Elsevier Ltd. All rights reserved. 4. Strategies towards sustainable bark sourcing as raw material for ... African Journals Online (AJOL) 2017-07-31 ... Warburgia salutaris bark is used to treat opportunistic ... local communities and households (Shackleton, 2015). ... tree size are necessary for analysis of the impact of ... due to human influence, it is alternated with a mosaic of ... hardness" to be removed from wood were noted. ..... flow and poor water supply. 5. 6 Source Categories - Boilers (Proposed Action) Science.gov (United States) EPA is proposing options to simplify the Clean Air Act permitting process for certain smaller sources of air pollution commonly found in Indian country. This action would ensure that air quality in Indian country is protected. 6. 40 CFR 98.360 - Definition of the source category. Science.gov (United States) 2010-07-01 ... this rule. (b) A manure management system (MMS) is a system that stabilizes and/or stores livestock... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Manure Management § 98.360 Definition of the source category. (a) This source category consists of livestock facilities with manure management systems that emit 25... 7. 40 CFR 98.70 - Definition of source category. Science.gov (United States) 2010-07-01 ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Definition of source category. 98.70... 
(CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ammonia Manufacturing § 98.70 Definition of source category...-based feedstock produced via steam reforming of a hydrocarbon. (b) Ammonia manufacturing processes in... 8. 40 CFR 98.40 - Definition of the source category. Science.gov (United States) 2010-07-01 ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Definition of the source category. 98... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.40 Definition of the source... category does not include portable equipment, emergency equipment, or emergency generators, as defined in... 9. 40 CFR 98.420 - Definition of the source category. Science.gov (United States) 2010-07-01 ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Definition of the source category. 98.420 Section 98.420 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... distribution of CO2. (4) Purification, compression, or processing of CO2. (5) On-site use of CO2 captured on... 10. Task-Modulated Cortical Representations of Natural Sound Source Categories DEFF Research Database (Denmark) Hjortkjær, Jens; Kassuba, Tanja; Madsen, Kristoffer Hougaard 2018-01-01 In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI... 11. 40 CFR 98.240 - Definition of the source category. Science.gov (United States) 2010-07-01 ... makes methanol, hydrogen, and/or ammonia from synthesis gas is part of the petrochemical source category... hydrogen recovered as product and ammonia. The facility is part of subpart P of this part (Hydrogen... levels of both methanol and ammonia. The facility is part of subpart G of this part (Ammonia... 12. 
Larix decidua Bark as a Source of Phytoconstituents: An LC-MS Study Directory of Open Access Journals (Sweden) Valeria Baldan 2017-11-01 Full Text Available Larix decidua bark is a waste product of the timber industry and is widely diffused in Northern Italy. This material can be considered a good source of antioxidants and phytoconstituents with possible use in cosmetic or nutraceutical products. In this study, simple extraction of larch bark was performed using mixtures of ethanol/water. Furthermore, the phytochemical composition of larch bark extract was studied using LC-MSn methods and the main constituents were identified as flavonoids, spiro-polyphenols, and procyanidins. To confirm the identification by LC-MS, semi-preparative HPLC was performed in order to isolate the main constituents and verify the structures by 1H-NMR. Antioxidant properties were studied using an in vitro approach combining the DPPH assay and LC-MS in order to establish the different roles of the various classes of phytochemicals in the extract. DPPH activity of some of the isolated compounds was also assessed. The overall results indicate this waste material is a good source of antioxidant compounds, mainly procyanidins, which proved the most active constituents in the DPPH assay. 13. 76 FR 4155 - National Emission Standards for Hazardous Air Pollutants for Source Categories: Gasoline... Science.gov (United States) 2011-01-24 ... 63 National Emission Standards for Hazardous Air Pollutants for Source Categories: Gasoline Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities; and Gasoline Dispensing Facilities; Final...] RIN 2060-AP16 National Emission Standards for Hazardous Air Pollutants for Source Categories: Gasoline... 14. Preliminary data summary for the hospitals point-source category International Nuclear Information System (INIS) Strassler, E.; Hund, F.H. 1989-09-01 The summaries were prepared in order to allow EPA to respond to the mandate of Section 304(m) of the Clean Water Act. 
Summaries for categories already subject to rulemaking were developed for comparison purposes, and contain only the minimum amount of data needed to provide some perspective on the relative magnitude of the pollution problems created across the categories. The document summarizes the most current information available regarding the discharge of wastewater and solid wastes containing priority and hazardous non-priority pollutants by hospitals. The document provides a technical basis for determining whether additional national regulations should be developed pursuant to the Clean Water Act (CWA), and makes available preliminary information regarding the discharge of priority and hazardous non-priority pollutants by the hospital industry. 15. Bark as potential source of chemical substances for industry: analysis of content of selected phenolic compounds Czech Academy of Sciences Publication Activity Database Maršík, Petr; Kotyza, Jan; Rezek, Jan; Vaněk, Tomáš -, č. 1 (2013), s. 4-9 ISSN 1804-0195 R&D Projects: GA MŠk(CZ) OC10026 Institutional research plan: CEZ:AV0Z50380511 Keywords: bark * extraction * phenolic compounds Subject RIV: EI - Biotechnology ; Bionics http://www.wasteforum.cz/cisla/WF_1_2013.pdf#page=4 16. Tactics at the category level of purchasing and supply management: sourcing levers, contingencies and performance OpenAIRE Hesping, Frank 2015-01-01 For the ‘front-line’ purchasing agent, it is obvious that not all categories of products and supplier relationships should be managed in the same way. Rather, in a modern category management approach, firms group similar products into ‘sourcing categories’ forming coherent supply markets (e.g., ‘metal sheets’, ‘leather’, ‘displays’, ‘cables’, etc.). Thus, to achieve cost reduction targets, a tailored mix of tactical sourcing levers for each sourcing category may be required. The fundamental q... 17. 
The potential of elemental and isotopic analysis of tree bark for discriminating sources of airborne lead contamination in the UK. Science.gov (United States) Bellis, D; McLeod, C W; Satake, K 2001-02-01 Samples of tree bark, which accumulate airborne material, were collected from seven locations in the UK to provide an indication of the magnitude and source of lead pollution. Measurement of the Pb content and 206/207Pb stable isotope ratio by inductively coupled plasma mass spectrometry revealed significant differences between the sites. The concentration of Pb varied over almost four orders of magnitude from 7.2 to 9,600 micrograms g-1, the maximum values being found near a 'secondary' Pb smelter. The 206/207Pb isotope ratios varied from 1.108 +/- 0.002 to 1.169 +/- 0.001. The lowest Pb concentrations and highest isotope ratios were detected in bark samples from the Scilly Isles, reflecting the low-level of industry and road traffic. In contrast, samples obtained from a city centre (Sheffield) and near a motorway (M1) contained 25-46 micrograms g-1 Pb and recorded the lowest 206/207Pb ratios. Higher concentrations in the vicinity of a coal-fired power station recorded a 206/207Pb ratio of 1.14, suggesting a significant contribution from fly-ash. The relative contribution of lead from petrol (206/207Pb = 1.08) and other sources such as coal (206/207Pb = 1.18) were thus estimated using mass balance equations. Tree bark near the lead smelter recorded an intermediate 206/207Pb ratio of 1.13 reflecting the processing of material of mixed origin. 18. The Role of External Sources of Information in Children's Evaluative Food Categories Science.gov (United States) Nguyen, Simone P. 2012-01-01 Evaluative food categories are value-laden assessments, which reflect the healthfulness and palatability of foods (e.g. healthy/unhealthy, yummy/yucky). 
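The two-end-member mass balance used in the tree-bark lead-isotope study above can be sketched as follows. Linear mixing of the 206/207Pb ratio is a simplification (a rigorous balance would weight by Pb concentration in each source), and the end-member ratios are the ones quoted in the abstract:

```python
# Two-end-member 206/207Pb mixing sketch (end-member ratios from the
# abstract: petrol ~1.08, coal ~1.18). Simplified: ignores concentration
# weighting between the two sources.
def petrol_fraction(measured, petrol=1.08, coal=1.18):
    """Fraction of Pb attributable to petrol under linear ratio mixing."""
    return (coal - measured) / (coal - petrol)

# Bark near the coal-fired power station recorded 206/207Pb of ~1.14,
# implying a substantial coal/fly-ash contribution:
print(round(petrol_fraction(1.14), 2))  # 0.4
```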
In a series of three studies, this research examines how 3- to 4-year-old children (N = 147) form evaluative food categories based on input from external sources of information. The results… 19. Tactics at the category level of purchasing and supply management: sourcing levers, contingencies and performance NARCIS (Netherlands) Hesping, Frank 2015-01-01 For the ‘front-line’ purchasing agent, it is obvious that not all categories of products and supplier relationships should be managed in the same way. Rather, in a modern category management approach, firms group similar products into ‘sourcing categories’ forming coherent supply markets (e.g., 20. Fungal Volatiles Can Act as Carbon Sources and Semiochemicals to Mediate Interspecific Interactions Among Bark Beetle-Associated Fungal Symbionts. Directory of Open Access Journals (Sweden) Jonathan A Cale Full Text Available Mountain pine beetle (Dendroctonus ponderosae) has killed millions of hectares of pine forests in western North America. Beetle success is dependent upon a community of symbiotic fungi comprised of Grosmannia clavigera, Ophiostoma montium, and Leptographium longiclavatum. Factors regulating the dynamics of this community during pine infection are largely unknown. However, fungal volatile organic compounds (FVOCs) help shape fungal interactions in model and agricultural systems and thus may be important drivers of interactions among bark beetle-associated fungi. We investigated whether FVOCs can mediate interspecific interactions among mountain pine beetle's fungal symbionts by affecting fungal growth and reproduction. Headspace volatiles were collected and identified to determine species-specific volatile profiles. Interspecific effects of volatiles on fungal growth and conidia production were assessed by pairing physically-separated fungal cultures grown either on a carbon-poor or -rich substrate, inside a shared-headspace environment. 
Fungal VOC profiles differed by species and influenced the growth and/or conidia production of the other species. Further, our results showed that FVOCs can be used as carbon sources for fungi developing on carbon-poor substrates. This is the first report demonstrating that FVOCs can drive interactions among bark beetle fungal symbionts, and thus are important factors in beetle attack success. 1. 77 FR 11390 - Delegation of National Emission Standards for Hazardous Air Pollutants for Source Categories; Nevada Science.gov (United States) 2012-02-27 ... Source Categories. Subpart ZZZZZZ--NESHAP: Area Source Standards for Aluminum, Copper, and Other... Perchloroethylene Dry X X X Cleaning. N Hard and Decorative X X X Chromium Electroplating and Chromium Anodizing... Publishing X X X Industry. LL Primary Aluminum Reduction X X Plants. MM Chemical Recovery X X Combustion... 2. 40 CFR Table 1 to Subpart Xxxxxx... - Description of Source Categories Affected by This Subpart Science.gov (United States) 2010-07-01 ... heating units, combination gas-oil burners, oil or gas swimming pool heaters, heating apparatus (except... supplies industry sector of this source category includes establishments primarily engaged in high energy...), coke and gas burning salamanders, liquid or gas solar energy collectors, solar heaters, space heaters... 3. Willow Bark Science.gov (United States) ... willow bark extract, ginger root concentrate, boswellia extract, turmeric root extract, cayenne, and hyaluronic acid (Instaflex Joint ... Sensitivity to aspirin: People with ASTHMA, STOMACH ULCERS, DIABETES, GOUT, HEMOPHILIA, HYPOPROTHROMBINEMIA, or KIDNEY or LIVER DISEASE ... 4. Current status of securing Category 1 and 2 radioactive sources in Taiwan Energy Technology Data Exchange (ETDEWEB) Cheng, Y-F.; Tsai, C-H. 
[Atomic Energy Council of Executive Yuan of Taiwan (China) 2014-07-01 For enhancing safe and secure management of Category 1 and 2 radioactive sources against theft or unauthorized removal, the AEC (Atomic Energy Council) of Taiwan has been regulating the import/export of the sources since 2005, in compliance with the IAEA's (International Atomic Energy Agency) 'Guidance on the Import and Export of Radioactive Sources'. Furthermore, following the IAEA Nuclear Security Series No. 11 report, administrative regulations on the program of securing the sources have been embodied in the AEC's regulatory system since 2012, requiring medical and non-medical licensees and industrial radiographers to establish their own radioactive source security programs. The regulations require that security functions such as access control, detection, delay, response and communication, as well as security management measures, be implemented within the programs. This paper introduces the current status of implementing the security control measures in Taiwan. (author) 5. Safety issues in the handling of radiation sources in category IV gamma radiation facilities International Nuclear Information System (INIS) Kohli, A.K. 2002-01-01 6. Cork Containing Barks - a review Science.gov (United States) Leite, Carla; Pereira, Helena 2016-12-01 Tree barks are among the least studied forest products notwithstanding their relevant physiological and protective role in tree functioning. The large diversity in structure and chemical composition of barks makes them a particularly interesting potential source of chemicals and bio-products, at present valued in the context of biorefineries. One of the valuable components of barks is cork (phellem in anatomy) due to a rather unique set of properties and composition. Cork from the cork oak (Quercus suber) has been extensively studied, mostly because of its economic importance and worldwide utilization of cork products. 
However, several other species have barks with substantial cork amounts that may constitute additional resources for cork-based bioproducts. This paper makes a review of the tree species that have barks with significant proportion of cork and on the available information regarding their bark structural and chemical characterization. A general integrative appraisal of the formation and types of barks and of cork development is also given. The knowledge gaps and the potential interesting research lines are identified and discussed, as well as the utilization perspectives. 7. Anthropogenic Sulfur Dioxide Emissions, 1850-2005: National and Regional Data Set by Source Category, Version 2.86 Data.gov (United States) National Aeronautics and Space Administration — The Anthropogenic Sulfur Dioxide Emissions, 1850-2005: National and Regional Data Set by Source Category, Version 2.86 provides annual estimates of anthropogenic... 8. Isolation of suberin from birch outer bark and cork using ionic liquids: A new source of macromonomers OpenAIRE Ferreira, Rui; Garcia, Helga; Sousa, Andreia F.; Freire, Carmen S. R.; Silvestre, Armando J. D.; Rebelo, Luis Paulo N.; Pereira, Cristina Silva 2013-01-01 Cholinium hexanoate, a biocompatible and biodegradable ionic liquid, was recently demonstrated to efficiently and selectively extract suberin domains from cork, combining high extraction efficiency with isolation of a partial depolymerised material. In the present paper, we report a comparative study of the characterisation of suberin extracted from birch outer bark and from cork using cholinium hexanoate. It became apparent that both extracted suberin samples showed still a cross-linked natu... 9. Anaerobic biodegradability of Category 2 animal by-products: methane potential and inoculum source. 
Science.gov (United States) Pozdniakova, Tatiana A; Costa, José C; Santos, Ricardo J; Alves, M M; Boaventura, Rui A R 2012-11-01 Category 2 animal by-products that need to be sterilized with steam pressure according to Regulation (EC) 1774/2002 are studied. In this work, 2 sets of experiments were performed under mesophilic conditions: (i) biomethane potential determination testing 0.5%, 2.0% and 5.0% total solids (TS), using sludge from the anaerobic digester of a wastewater treatment plant as inoculum; (ii) biodegradability tests at a constant TS concentration of 2.0% and different inoculum sources (digested sludge from a wastewater treatment plant; granular sludge from an upflow anaerobic sludge blanket reactor; leachate from a municipal solid waste landfill; and sludge from the slaughterhouse wastewater treatment anaerobic lagoon) to select the inoculum best adapted to the substrate under study. The highest specific methane production was 317 mL CH4 g-1 VS for 2.0% TS. The digested sludge from the wastewater treatment plant led to the shortest lag-phase period and the highest methane production rate. Copyright © 2012 Elsevier Ltd. All rights reserved. 10. Journey of a Package: Category 1 Source (Co-60) Shipment with Several Border Crossings, Multiple Modes International Nuclear Information System (INIS) Gray, P. A. 2016-01-01 Radioactive materials (RAM) are used extensively in a vast array of industries and in an even wider breadth of applications on a truly global basis each and every day. Over the past 50 years, these applications and the quantity (activity) of RAM shipped have grown significantly, with the next 50 years expected to show a continuing trend. The movement of these goods occurs in all regions of the world, and must therefore be conducted in a manner which will not adversely impact people or the environment. Industry and regulators have jointly met this challenge, so much so that RAM shipments are amongst the safest of any product. 
How has this level of performance been achieved? What is involved in shipping RAM from one corner of the world to another, often via a number of in-transit locations and often utilizing multiple modes of transport in any single shipment? This paper reviews one such journey, of Category 1 Cobalt-60 sources, as they move from point of manufacture through to point of use including the detailed and multi-approval process, the stringent regulatory requirements in place, the extensive communications required throughout, and the practical aspects needed to simply offer such a product for sale and transport. Upon completion, the rationale for such an exemplary safety and security record will be readily apparent. (author) 11. Multiple use of bark Energy Technology Data Exchange (ETDEWEB) Byzov, V I; Trestsov, A B 1979-01-01 A brief review of possible uses of the 130,000 cubic meters of bark produced annually by mills in the Mari ASSR. Present uses include tar production from birch bark and tannins from spruce bark. Several uses are suggested that require little capital expenditure: infill of roads, gullies etc.; fertilizers for market gardens and orchards; and bark/cement slabs. The manufacture is described of a new bark/cement slab suitable for low buildings, that uses milled green bark of spruce and pine. 12. Spectral and temporal cues for perception of material and action categories in impacted sound sources DEFF Research Database (Denmark) Hjortkjær, Jens; McAdams, Stephen 2016-01-01 In two experiments, similarity ratings and categorization performance with recorded impact sounds representing three material categories (wood, metal, glass) being manipulated by three different categories of action (drop, strike, rattle) were examined. Previous research focusing on single impact...... correlated with the pattern of confusion in categorization judgments. 
Listeners tended to confuse materials with similar spectral centroids, and actions with similar temporal centroids and onset densities. To confirm the influence of these different features, spectral cues were removed by applying... 13. Beech Bark Disease Science.gov (United States) David R. Houston; James T. O' Brien 1983-01-01 Beech bark disease causes significant mortality and defect in American beech, Fagus grandifolia (Ehrh.). The disease results when bark, attacked and altered by the beech scale, Cryptococcus fagisuga Lind., is invaded and killed by fungi, primarily Nectria coccinea var. faginata Lohman, Watson, and Ayers, and sometimes N. galligena Bres. 14. Loblolly pine bark flavanoids Science.gov (United States) J.J. Karchesy; R.W. Hemingway 1980-01-01 The inner bark of Pinus taeda L. contains (+)-catechin, the procyanidin 8.1 (a C-4 to C-8 linked (-)-epicatechin to (+)-catechin dimer), and three polymeric procyanidins that have distinctly different solubility and chromatographic properties. An ethyl acetate soluble polymer (0.20% of bark, Mn = 1200) was purified by chromatography on LH-20 Sephadex. A water-soluble... 15. Barking and mobbing. Science.gov (United States) Lord, Kathryn; Feinstein, Mark; Coppinger, Raymond 2009-07-01 Barking is most often associated with the domestic dog Canis familiaris, but it is a common mammalian and avian vocalization. Like any vocalization, the acoustic character of the bark is likely to be a product of adaptation as well as an expression of the signaler's internal motivational state. While most authors recognize that the bark is a distinct signal type, no consistent description of its acoustic definition or function is apparent. The bark exhibits considerable variability in its acoustic form and occurs in a wide range of behavioral contexts, particularly in dogs. 
This has led some authors to suggest that dog barking might be a form of referential signaling, or an adaptation for heightened capability to communicate with humans. In this paper we propose a general 'canonical' acoustic description of the bark. Surveying relevant literature on dogs, wild canids, other mammals and birds, we explore an alternative functional hypothesis, first suggested by [Morton, E.S., 1977. On the occurrence and significance of motivation-structural rules in some bird and mammal sounds. Am. Nat. 111, 855-869] and consistent with his motivational-structural rules theory: that barking in many animals, including the domestic dog, is associated with mobbing behavior and the motivational states that accompany mobbing. 16. Methodologies for estimating air emissions from three non-traditional source categories: Oil spills, petroleum vessel loading and unloading, and cooling towers. Final report, October 1991-March 1993 International Nuclear Information System (INIS) Ramadan, W.; Sleva, S.; Dufner, K.; Snow, S.; Kersteter, S.L. 1993-04-01 The report discusses part of EPA's program to identify and characterize emissions sources not currently accounted for by either the existing Aerometric Information Retrieval System (AIRS) or State Implementation Plan (SIP) area source methodologies and to develop appropriate emissions estimation methodologies and emission factors for a group of these source categories. Based on the results of the identification and characterization portions of this research, three source categories were selected for methodology and emission factor development: oil spills, petroleum vessel loading and unloading, and cooling towers. The report describes the category selection process and presents emissions estimation methodologies and emission factor data for the selected source categories. 
The discussions for each category include general background information, emissions generation activities, pollutants emitted, sources of activity and pollutant data, emissions estimation methodologies and data issues. The information used in these discussions was derived from various sources including available literature, industrial and trade association publications and contacts, experts on the category and activity, and knowledgeable federal and state personnel 17. Quantifying sources of variation in the frequency of fungi associated with spruce beetles: implications for hypothesis testing and sampling methodology in bark beetle-symbiont relationships. Science.gov (United States) Brian H. Aukema; Richard A. Werner; Kirsten E. Haberkern; Barbara L. Illman; Murray K. Clayton; Kenneth F. Raffa 2005-01-01 The spruce beetle, Dendroctonus rufipennis (Kirby), causes landscape level mortality to mature spruce (Picea spp.) throughout western and northern North America. As with other bark beetles, this beetle is associated with a variety of fungi, whose ecological functions are largely unknown. It has been proposed that the relative... 18. Living on the Bark Indian Academy of Sciences (India) of bark provides a waterproof layer on which water drops contain- ing fungal spores ..... Grey squirrel (Sciurus carolinensis and S. griseus), red squir- rel (S. vulgaris ... cotton (Abroma angustum) is useful in treatment of gynaecological ailments. 19. Bark chemical analysis explains selective bark damage by rodents Czech Academy of Sciences Publication Activity Database Heroldová, Marta; Jánová, Eva; Suchomel, J.; Purchart, L.; Homolka, Miloslav 2009-01-01 Roč. 2, č. 2 (2009), s. 137-140 ISSN 1803-2451 R&D Projects: GA MZe QH72075 Institutional research plan: CEZ:AV0Z60930519 Keywords: bark damage * bark selection * bark chemical analysis * rowan * beech * spruce * mountain forest regeneration Subject RIV: GK - Forestry 20. 
Antimicrobial screening of ethnobotanically important stem bark of medicinal plants. Science.gov (United States) Singh, Meenakshi; Khatoon, Sayyada; Singh, Shweta; Kumar, Vivek; Rawat, Ajay Kumar Singh; Mehrotra, Shanta 2010-07-01 Stem barks are rich sources of tannins and other phenolic compounds. Tannins inhibit the growth of various fungi, yeasts, bacteria and viruses. Hence, the stem barks of ten ethnomedicinally important plants were screened for antibacterial and antifungal activities against human pathogenic strains. Air-dried and powdered stem bark of each plant was extracted with 50% aqueous ethanol, lyophilized, and the dried crude extracts were used for screening against 11 bacteria and 8 fungi. Antibacterial and antifungal assays were performed according to NCCLS microdilution methods. The plants Prosopis chilensis, Pithecellobium dulce and Mangifera indica showed significant antibacterial and antifungal activities against Streptococcus pneumoniae, Enterobacter aerogenes, Klebsiella pneumoniae and Candida albicans with an MIC of 0.08 mg/ml. Pithecellobium dulce bark also showed significant antibacterial activity against Bacillus cereus. The bark of Pithecellobium dulce has activity broadly comparable to that of the known antibiotic and may be considered a potent antimicrobial agent for various infectious diseases. 1. Efficient dewatering of bark in heated presses. Survey and pilot-scale trials; Effektivare avvattning av bark i vaermda pressar. Problemkartering samt foersoek i pilotskala Energy Technology Data Exchange (ETDEWEB) Haakansson, Martin; Stenstroem, Stig (Lund Inst. of Technology, Lund (SE)) 2007-12-15 Dewatering and drying of biofuels such as bark and GROT have become increasingly important owing to growing interest in using these products as energy sources. In Sweden about 30 bark presses are installed; however, very little information is available about bark dewatering.
The goal of this work is to increase knowledge about bark dewatering. Two separate goals were defined in the project: A. Survey problems related to bark dewatering and compile operating experience at Swedish mills. B. Study how different parameters affect bark dewatering in pilot-scale experiments, and study different techniques for heating bark and for the bark pressing process. The results will mainly be of interest to mills that handle bark, to municipal power plants that buy wet forest residues (bark, GROT etc.) and to manufacturers of industrial bark pressing equipment. The results show that the dry matter content of birch and pine bark is normally so high that pressing does not dewater these barks. Both dry and wet debarking are used, and these bark fractions should be pressed separately. On-line measurement of the dry matter content of the bark should be a standard tool on the bark press; this will facilitate improved control of the press throughout the year. Other conclusions are that smaller bark particles result in a higher dry matter content, that large bark and wood pieces reduce dewatering in the bark press, and that the total residence time in the press nip should be at least 30 seconds. The most common way to handle bark press water is to send it to the evaporators or to the water purification plant. Maintenance of the bark press appears not to be a major problem. Hot pressing can be accomplished in different ways: either the bark press itself can be heated, or the bark can be heated. The alternatives that have been studied in this project are steaming the bark, heating the bark using 2. Contested Categories DEFF Research Database (Denmark) Drawing on social science perspectives, Contested Categories presents a series of empirical studies that engage with the often shifting and day-to-day realities of life sciences categories.
In doing so, it shows how such categories remain contested and dynamic, and that the boundaries they create... 3. Bark is the Hallmark Indian Academy of Sciences (India) of bark provides a waterproof layer on which water drops containing fungal spores ..... water. c) The phelloderm: Cells of the phelloderm layer are produced on the inner side of the phellogen .... brown or grey in colour. ... extracted from the dried Cinchona bark are used in the treatment of malarial fevers and are ... 4. stem bark in rodents African Journals Online (AJOL) 2008-05-02 ... The effect of the extract on the normal intestinal transit in mice was not significant. However, in the ... kunthianum stem bark was therefore investigated in mice and rats' in vivo ..... sons, London, 11: 544. Izzo AA, Nicoletti M, ... 5. Liver transplantation from Maastricht category 2 non-heart-beating donors: a source to increase the donor pool? Science.gov (United States) Otero, A; Gómez-Gutiérrez, M; Suárez, F; Arnal, F; Fernández-García, A; Aguirrezabalaga, J; García-Buitrón, J; Alvarez, J; Máñez, R 2004-04-01 The demand for liver transplantation has increasingly exceeded the supply of cadaver donor organs. Non-heart-beating donors (NHBDs) may be an alternative to increase the cadaver donor pool. The outcome of 20 liver transplants from Maastricht category 2 NHBDs was compared with that of 40 liver transplants from heart-beating donors (HBDs). After unsuccessful cardiopulmonary resuscitation (CPR), cardiopulmonary support with simultaneous application of chest and abdominal compression (CPS; n = 6) or cardiopulmonary bypass (CPB; n = 14) was used to maintain the donors. At a minimum follow-up of 2 years, actuarial patient and graft survival rates with livers from Maastricht category 2 NHBDs were 80% and 55%, respectively. Transplantation of organs from these donors was associated with a significantly higher incidence of primary nonfunction, biliary complications, and more severe initial liver dysfunction compared with organs from HBDs.
The graft survival rate was 83% for livers from NHBDs preserved with CPS and 42% for those maintained with CPB. 6. Comparative analgesic activity of the root bark, stem bark, leaves ... African Journals Online (AJOL) The analgesic activity of the water extracts (50, 100 and 150 mg/kg body weight) of the root bark, stem bark, leaves, fruits and seeds of Carissa edulis was evaluated in mice using a mechanical method (tail-clip method) and a chemical method (acetic acid-induced writhing). The plant was found to have analgesic activity, ... 7. Development document for the effluent limitations and guidelines for the ore mining and dressing point source category. Volume I. Final report International Nuclear Information System (INIS) Jarrett, B.M.; Kirby, R.G. 1978-07-01 To establish effluent limitation guidelines and standards of performance, the ore mining and dressing industry was divided into 41 separate categories and subcategories for which separate limitations were recommended. This report deals with the entire metal-ore mining and dressing industry and examines the industry by ten major categories: iron ore; copper ore; lead and zinc ores; gold ore; silver ore; bauxite ore; ferroalloy-metal ores; mercury ores; uranium, radium and vanadium ores; and metal ores not elsewhere classified (ores of antimony, beryllium, platinum, rare earths, tin, titanium, and zirconium). The subcategorization of the ore categories is based primarily upon ore mineralogy and processing or extraction methods employed; however, other factors (such as size, climate or location, and method of mining) are used in some instances. With the best available technology economically achievable, facilities in 21 of the 41 subcategories can be operated with no discharge of process wastewater to navigable waters. No discharge of process wastewater is also achievable as a new source performance standard for facilities in 21 of the 41 subcategories 8.
Air pollution assessment using tree barks as biomonitors Energy Technology Data Exchange (ETDEWEB) Santos, Eliane C.; Saiki, Mitiko, E-mail: [email protected], E-mail: [email protected] [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil) 2017-07-01 In recent decades tree barks have become a very common bioindicator of air pollution because of their several advantages over other bioindicators. In the present study, tree barks were collected from different sites of the Metropolitan Region of São Paulo (MRSP) and from two control sites far away from the MRSP. The barks were analyzed by neutron activation analysis (NAA) for the determination of As, Br, Ca, Cl, Co, Cr, Cs, Fe, K, La, Mg, Mn, Ni, Rb, Sb, Sc, V and Zn, and by graphite furnace atomic absorption spectrometry (GF AAS) for Cd, Cu and Pb. Results obtained for samples collected at different sampling sites in the MRSP presented wide variability due to the different pollutant levels each tree was exposed to. High concentrations of Cd, Pb, Sb and Zn were obtained in tree barks sampled close to heavy vehicular traffic. Principal component analysis (PCA) was applied to identify four possible emission sources: soil resuspension plus vehicular emission, industrial emissions, marine aerosols, and the tree bark structure itself. The enrichment factor (EF) results indicated that all the elements originated from anthropic sources, with the exception of Cs. Cluster analysis indicated no significant differences between the MRSP and control sites with regard to the characteristics of element emissions, probably because the control sites are also located in urban areas. The results of certified reference material analyses indicated that NAA and GF AAS provided reliable data for element concentrations, with standardized differences |Z score| < 2. (author) 9.
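The quality-control criterion quoted in the abstract above, standardized differences with |Z score| < 2 against a certified reference material, can be sketched as a small calculation. This is an illustrative sketch only: the combined-uncertainty form of the Z score and all numbers below are assumptions, not values from the study.

```python
import math

def z_score(measured, certified, u_measured, u_certified):
    """Standardized difference between a measured and a certified value:
    Z = (measured - certified) / sqrt(u_measured^2 + u_certified^2).
    |Z| < 2 is commonly taken as satisfactory agreement."""
    return (measured - certified) / math.sqrt(u_measured**2 + u_certified**2)

# Hypothetical example: an element concentration in mg/kg.
z = z_score(measured=34.2, certified=33.0, u_measured=1.0, u_certified=0.5)
acceptable = abs(z) < 2  # True for these made-up numbers
```

The acceptance threshold of 2 corresponds to roughly a 95% coverage interval when the uncertainties are standard uncertainties.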
Air pollution assessment using tree barks as biomonitors International Nuclear Information System (INIS) Santos, Eliane C.; Saiki, Mitiko 2017-01-01 10. Organizational Categories as Viewing Categories OpenAIRE Mik-Meyer, Nanna 2005-01-01 This paper explores how two Danish rehabilitation organizations' textual guidelines for the assessment of clients' personality traits influence the actual evaluation of clients.
The analysis will show how staff members produce institutional identities corresponding to organizational categories, which very often have little or no relevance for the clients evaluated. The goal of the article is to demonstrate how the institutional complex that frames the work of the organizations produces the client ... 11. Attribution of aerosol radiative forcing over India during the winter monsoon to emissions from source categories and geographical regions Science.gov (United States) Verma, S.; Venkataraman, C.; Boucher, O. 2011-08-01 We examine the aerosol radiative effects due to aerosols emitted from different emission sectors (anthropogenic and natural) and originating from different geographical regions within and outside India during the northeast (NE) Indian winter monsoon (January-March). These studies are carried out through aerosol transport simulations in the general circulation model (GCM) of the Laboratoire de Météorologie Dynamique (LMD). The model estimates of aerosol single scattering albedo (SSA) show lower values (0.86-0.92) over the region north of 10°N, comprising the Indian subcontinent, Bay of Bengal, and parts of the Arabian Sea, compared to the region south of 10°N, where the estimated SSA values lie in the range 0.94-0.98. The model-estimated SSA is consistent with SSA values inferred from measurements on various platforms. Aerosols of anthropogenic origin reduce the incoming solar radiation at the surface by a factor of 10-20 times the reduction due to natural aerosols. At the top-of-atmosphere (TOA), aerosols from biofuel use cause positive forcing, in contrast to the negative forcing from fossil fuel and natural sources, in correspondence with the distribution of SSA, which is estimated to be lowest (0.7-0.78) for biofuel combustion emissions.
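Single scattering albedo, central to the abstract above, is simply the scattering fraction of total light extinction by an aerosol. The sketch below is illustrative only; the coefficient values are invented, not taken from the study.

```python
def single_scattering_albedo(scattering, absorption):
    """SSA = scattering / (scattering + absorption).
    Values near 1 indicate mostly scattering (cooling) aerosol;
    low values indicate strongly absorbing aerosol."""
    return scattering / (scattering + absorption)

# Hypothetical extinction coefficients (arbitrary units):
ssa_absorbing = single_scattering_albedo(scattering=70.0, absorption=25.0)   # strongly absorbing mix
ssa_scattering = single_scattering_albedo(scattering=95.0, absorption=2.0)   # mostly scattering mix
```

A low SSA (as reported for biofuel combustion aerosol) is why the same column of aerosol can warm the atmosphere at TOA while still dimming the surface.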
Aerosols originating from India and Africa-west Asia lead to a reduction in surface radiation (-3 to -8 W m⁻²) amounting to 40-60% of the total reduction in surface radiation due to all aerosols over the Indian subcontinent and adjoining ocean. Aerosols originating from India and Africa-west Asia also lead to positive radiative effects at TOA over the Arabian Sea and central India (CNI), with the highest positive radiative effects over the Bay of Bengal, and cause either negative or positive effects over the Indo-Gangetic plain (IGP). 12. Consumer Product Category Database Science.gov (United States) The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information. 13. Identification and characterization of five non-traditional-source categories: Catastrophic/accidental releases, vehicle repair facilities, recycling, pesticide application, and agricultural operations. Final report, September 1991-September 1992 International Nuclear Information System (INIS) Sleva, S.; Pendola, J.A.; McCutcheon, J.; Jones, K.; Kersteter, S.L. 1993-03-01 The work is part of EPA's program to identify and characterize emissions sources not currently accounted for by either the existing Aerometric Information Retrieval System (AIRS) or State Implementation Plan (SIP) area source methodologies and to develop appropriate emissions estimation methodologies and emission factors for a group of these source categories.
Based on the results of the identification and characterization portions of the research, five source categories were selected for methodology and emission factor development: catastrophic/accidental releases, vehicle repair facilities, recycling, pesticide application and agricultural operations. The report presents emissions estimation methodologies and emission factor data for the selected source categories. The discussions for each selected category include general background information, emissions generation activities, pollutants emitted, sources of activity and pollutant data, emissions estimation methodologies, issues to be considered and recommendations. The information used in these discussions was derived from various sources including available literature, industrial and trade association publications and contacts, experts on the category and activity, and knowledgeable federal and state personnel 14. Tree physiology and bark beetles Science.gov (United States) Michael G. Ryan; Gerard Sapes; Anna Sala; Sharon Hood 2015-01-01 Irruptive bark beetles usually co-occur with their co-evolved tree hosts at very low (endemic) population densities. However, recent droughts and higher temperatures have promoted widespread tree mortality with consequences for forest carbon, fire and ecosystem services (Kurz et al., 2008; Raffa et al., 2008; Jenkins et al., 2012). In this issue of New Phytologist,... 15. Chemical profiling and biological activity analysis of cone, bark and needle of Pinus roxburghii collected from Nepal Directory of Open Access Journals (Sweden) Rupak Thapa 2018-03-01 Conclusions: This study showed that the needle, cone and bark of Pinus roxburghii are a rich source of biologically active metabolites. Furthermore, the bark extract revealed the presence of diverse chemical constituents. [J Complement Med Res 2018; 7(1): 66-75] 16.
Spatial distributions and enantiomeric signatures of DDT and its metabolites in tree bark from agricultural regions across China. Science.gov (United States) Niu, Lili; Xu, Chao; Zhang, Chunlong; Zhou, Yuting; Zhu, Siyu; Liu, Weiping 2017-10-01 Tree bark is considered an effective passive sampler for estimating the atmospheric status of pollutants. In this study, we conducted a national-scale tree bark sampling campaign across China. Concentration profiles revealed that Eastern China, especially the Jing-Jin-Ji region (including Hebei Province, Beijing and Tianjin), was a hot spot of bark DDT pollution. Enantioselective accumulation of o,p'-DDT was observed in most of the samples, and 68% of them showed a preferential depletion of (+)-o,p'-DDT. These results suggest that DDTs in rural bark are likely from combined sources, including historical technical DDTs and fresh dicofol usage. The tree bark DDT levels were found to correlate with soil DDT concentrations, socioeconomic factors and PM2.5 at the sampling sites. It thus becomes evident that reemission from soils and subsequent atmospheric deposition were the major pathways leading to the accumulation of DDTs in bark. Based on a previously established bark-air partitioning model, the concentrations of DDTs in the air were estimated from measured concentrations in tree bark, and the results were comparable to those obtained by passive sampling with polyurethane foam (PUF) disks. Our results demonstrate the feasibility of delineating the spatial variations in atmospheric concentration and tracing sources of DDTs by integrating the use of tree bark with enantiomeric analysis. Copyright © 2017 Elsevier Ltd. All rights reserved. 17. Condensed Tannins from Longan Bark as Inhibitor of Tyrosinase: Structure, Activity, and Mechanism.
Science.gov (United States) Chai, Wei-Ming; Huang, Qian; Lin, Mei-Zhen; Ou-Yang, Chong; Huang, Wen-Yang; Wang, Ying-Xia; Xu, Kai-Li; Feng, Hui-Ling 2018-01-31 In this study, the content, structure, antityrosinase activity, and mechanism of longan bark condensed tannins were evaluated. The findings obtained from mass spectrometry demonstrated that longan bark condensed tannins were mixtures of procyanidins, propelargonidins, prodelphinidins, and their acyl derivatives (galloyl and p-hydroxybenzoate). The enzyme analysis indicated that these mixtures were efficient, reversible, mixed-type (predominantly competitive) inhibitors of tyrosinase. Moreover, the mixtures showed good inhibition of the proliferation, intracellular enzyme activity and melanogenesis of mouse melanoma (B16) cells. Molecular docking showed that the interactions between the inhibitors and tyrosinase were driven by hydrogen bonding, electrostatic, and hydrophobic interactions. In addition, high levels of total phenolics and extractable condensed tannins suggested that longan bark might be a good source of tyrosinase inhibitors. This study offers a theoretical basis for the development of longan bark condensed tannins as novel food preservatives and medicines for skin diseases. 18.
Retrospective determination of ¹³⁷Cs specific activity distribution in spruce bark and bark aggregated transfer factor in forests on the scale of the Czech Republic ten years after the Chernobyl accident Energy Technology Data Exchange (ETDEWEB) Suchara, I., E-mail: [email protected] [Silva Tarouca Research Institute for Landscape and Ornamental Gardening, Kvetnove namesti 391, CZ 252 43 Pruhonice (Czech Republic); Rulik, P., E-mail: [email protected] [National Radiation Protection Institute, Bartoskova 28, CZ 140 00 Prague 4 (Czech Republic); Hulka, J., E-mail: [email protected] [National Radiation Protection Institute, Bartoskova 28, CZ 140 00 Prague 4 (Czech Republic); Pilatova, H., E-mail: [email protected] [National Radiation Protection Institute, Bartoskova 28, CZ 140 00 Prague 4 (Czech Republic)] 2011-04-15 The ¹³⁷Cs specific activities (mean 32 Bq kg⁻¹) were determined in spruce bark samples that had been collected at 192 sampling plots throughout the Czech Republic in 1995, and were related to the sampling year. The ¹³⁷Cs specific activities in spruce bark correlated significantly with the ¹³⁷Cs depositions in areas affected by different precipitation sums at the time of the Chernobyl fallout in 1986. The ratio of the ¹³⁷Cs specific activities in bark (Bq kg⁻¹) to the ¹³⁷Cs deposition levels (Bq m⁻²) yielded a bark aggregated transfer factor T_ag of about 10.5 × 10⁻³ m² kg⁻¹. Taking into account the residual specific activities of ¹³⁷Cs in bark (20 Bq kg⁻¹) and the available pre-Chernobyl data on the ¹³⁷Cs deposition loads on the soil surface in the Czech Republic, the real aggregated transfer factors after and before the Chernobyl fallout proved to be T*_ag = 3.3 × 10⁻³ m² kg⁻¹ and T**_ag = 4.0 × 10⁻³ m² kg⁻¹, respectively.
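The aggregated transfer factor reported above is the ratio of the specific activity in bark (Bq kg⁻¹) to the ground deposition (Bq m⁻²). A minimal sketch, using the mean bark activity from the abstract and a deposition value back-calculated purely for illustration (not a value from the study):

```python
def aggregated_transfer_factor(bark_activity_bq_per_kg, deposition_bq_per_m2):
    """T_ag (m^2 kg^-1): activity concentration in bark per unit ground deposition."""
    return bark_activity_bq_per_kg / deposition_bq_per_m2

# 32 Bq/kg is the mean bark activity from the abstract; 3050 Bq/m^2 is an
# assumed deposition chosen so that T_ag lands near the reported ~10.5e-3 m^2/kg.
t_ag = aggregated_transfer_factor(32.0, 3050.0)
```

Dividing an activity per mass by an activity per area is what gives T_ag its units of m² kg⁻¹.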
The aggregated transfer factors T*_ag for ¹³⁷Cs and spruce bark did not differ significantly in areas unequally affected by the ¹³⁷Cs fallout in the Czech Republic in 1986, and the figures for these aggregated transfer factors were very similar to the mean bark T_ag values published from the extensively affected areas near Chernobyl. The magnitude of the ¹³⁷Cs aggregated transfer factors for spruce bark for the pre-Chernobyl and post-Chernobyl periods in the Czech Republic was also very similar. The variability in spruce bark acidity caused by the operation of local anthropogenic air pollution sources did not significantly influence the accumulation and retention of ¹³⁷Cs in spruce bark. Increasing elevation of the bark sampling plots had a significant effect on raising the remaining ¹³⁷Cs specific activities in bark in areas affected by precipitation at the time when the plumes crossed, because 19. Sources International Nuclear Information System (INIS) Duffy, L.P. 1991-01-01 This paper discusses the sources of radiation in the narrow perspective of radioactivity and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman to speak without bias and prejudice for the public good, technical jargon with unclear definitions within the radioactive nomenclature, and a scientific community that keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation.
These include such things as: plutonium being described as the most dangerous substance known to man, the amount of plutonium required to make a bomb, talk of transuranic waste containing plutonium and its health effects, TMI-2 and Chernobyl being described as Siamese twins, inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61, and enhanced engineered waste disposal not being presented to the public accurately. There are also numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the Federal and State health agencies' resources to address comparative risk, and regulatory agencies speaking out without the support of the scientific community 20. Antioxidant Constituents from the Bark of Aglaia eximia (Meliaceae) Directory of Open Access Journals (Sweden) Julinton Sianturi 2016-03-01 Full Text Available The genus Aglaia is a rich source of different compounds with interesting biological activities. As part of our continuing search for novel biologically active compounds from Indonesian Aglaia plants, the ethyl acetate extract of the bark of Aglaia eximia showed significant antioxidant activity. Four antioxidant compounds, kaempferol (1), kaempferol-3-O-α-L-rhamnoside (2), kaempferol-3-O-β-D-glucoside (3) and kaempferol-3-O-β-D-glucosyl-(1→4)-α-L-rhamnoside (4), were isolated from the bark of Aglaia eximia (Meliaceae). The chemical structures of compounds 1-4 were identified on the basis of spectroscopic data, including UV, IR, NMR and MS, and by comparison with previously reported spectral data. All compounds showed DPPH radical-scavenging activity, with IC50 values of 1.18, 6.34, 8.17 and 10.63 mg/mL, respectively. 1.
Example Annual Certification & Compliance Reports for Sources with and without Visible Emissions Testing: NESHAP Area Source Standards for Nine Metal Fabrication and Finishing Source Categories 40 CFR 63 Subpart XXXXXX Science.gov (United States) This page contains examples of the type of information that must be submitted to fulfill the Notification of Compliance Status requirement of 40 CFR 63, subpart XXXXXX for sources reporting and not reporting visible emissions information. 2. Anti-pseudomonas activity of essential oil, total extract, and proanthocyanidins of Pinus eldarica Medw. bark. Science.gov (United States) 2016-01-01 Pinus eldarica Medw. (Iranian pine) is native to the Transcaucasian region and has been widely planted in Iran, Afghanistan, and Pakistan. Various parts of this plant have been widely used in traditional medicine for the treatment of various diseases, including infectious conditions (e.g. infectious wounds). In this study we aimed to investigate the antibacterial activity of P. eldarica bark extract, essential oil and proanthocyanidins against three important bacteria: Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa. Antibacterial analysis was performed using the standard disk diffusion method with different concentrations of essential oil, bark total hydroalcoholic extract, and bark proanthocyanidins (0.5, 1, 2 and 3 mg/ml). After incubation at 37°C for 24 h, the antibacterial activity was assessed by measuring the zone of growth inhibition surrounding the disks. The results indicated that the essential oil, total hydroalcoholic extract, and proanthocyanidins of P. eldarica bark were effective against the gram-negative bacterium P. aeruginosa and significantly inhibited its growth in the disk diffusion method; the essential oil had the most potent inhibitory effect. However, none of the bark preparations could significantly inhibit the growth of S. aureus or E. coli. Our findings showed that P.
eldarica bark components have significant anti-pseudomonas activity, with potential as new sources of antibacterial agents or antibacterial herbal preparations. 3. Production and characterization of nanospheres of bacterial cellulose from Acetobacter xylinum from processed rice bark International Nuclear Information System (INIS) Goelzer, F.D.E.; Faria-Tischer, P.C.S.; Vitorino, J.C.; Sierakowski, Maria-R.; Tischer, C.A. 2009-01-01 Bacterial cellulose (BC), biosynthesized by Acetobacter xylinum, was produced in a medium consisting of rice bark pre-treated with an enzymatic pool. Rice bark was evaluated as a carbon source by complete enzymatic hydrolysis and monosaccharide composition (GC-MS of derived alditol acetates). It was treated enzymatically and then enriched with glucose up to 4% (w/v). The BC produced by static and aerated processes was purified by immersion in 0.1 M NaOH and characterized by FT-IR and X-ray diffraction, and the biosynthetic nanostructures were evaluated by scanning electron (SEM), transmission electron (TEM) and atomic force microscopy (AFM). The BC films arising from static fermentation with rice bark/glucose and with glucose are tightly intertwined and partially crystalline, being type II cellulose when produced with rice bark/glucose and type I when produced in a glucose medium. The nanostructured biopolymer obtained from the rice bark/glucose medium, produced in a reactor with air flux, had micro- and nanospheres linked to cellulose nanofibers. These results indicate that the bark components, namely lignins, hemicelluloses or mineral contents, interact with the cellulose, forming micro- and nanostructures with potential use for drug incorporation 4. Some ecological, economic, and social consequences of bark beetle infestations Science.gov (United States) Robert A. Progar; Adris Eglitis; John E. Lundquist 2009-01-01 Bark beetles are powerful agents of change in dynamic forest ecosystems.
Most assessments of the effects of bark beetle outbreaks have been based on negative impacts on timber production. The positive effects of bark beetle activities are much less well understood. Bark beetles perform vital functions at all levels of scale in forest ecosystems. At the landscape... 5. Iron sources for citrus rootstock development grown on pine bark/vermiculite mixed substrate Fontes de ferro para o desenvolvimento de porta-enxertos cítricos produzidos em substrato de casca de pinus e vermiculita Directory of Open Access Journals (Sweden) Rhuanito Soranz Ferrarezi 2007-10-01 Full Text Available In high-technology seedling production systems, nutrition plays an important role, mainly fertigation with iron chelates to prevent iron deficiency. This study aimed to find alternative iron sources with the same nutrient efficiency but lower cost relative to the total cost of the nutrient solution. An experiment was carried out in 56 cm³ conical containers filled with a pine bark/vermiculite mixed substrate, using Fe-DTPA, Fe-EDDHA, Fe-EDDHMA, Fe-EDTA, Fe-HEDTA, FeCl3, FeSO4 and FeSO4+citric acid plus a control, and the rootstocks Swingle, Rangpur, Trifoliata and Cleopatra, in a randomized complete block design with four replicates. Seedlings were evaluated for height, relative chlorophyll index, and total and soluble iron leaf concentrations. Cleopatra was the only rootstock observed without visual iron chlorosis symptoms. There was a low relative chlorophyll index for the Rangpur, Swingle and Trifoliata rootstocks in the control plots, in agreement with the observed symptoms. High total iron concentrations were found in the control and Fe-EDTA plots, whereas soluble iron represented only a low percentage of the total iron.
The economic analysis showed the following costs of the iron sources relative to the total cost of the nutrient solution: Fe-HEDTA (37.25%) > FeCl3 (4.61%) > Fe-EDDHMA (4.53%) > Fe-EDDHA (3.35%) > Fe-DTPA (2.91%) > Fe-EDTA (1.08%) > FeSO4+citric acid (0.78%) > FeSO4 (0.25%). However, only plants from the Fe-EDDHA and Fe-EDDHMA treatments presented no visual deficiency symptoms. The relative cost of Fe-EDDHA application is low, and its efficiency in keeping iron available in solution resulted in tall plants, making it recommendable for citrus rootstock production in nurseries. 6. Isolation, identification and antagonistic activity evaluation of actinomycetes in barks of nine trees Directory of Open Access Journals (Sweden) Wang Dong-sheng 2017-01-01 Full Text Available Actinomycetes are important producers of novel bioactive compounds. New sources need to be explored to isolate previously unknown bioactive compound-producing actinomycetes. Here we evaluated the potential of bark as a natural source of novel bioactive actinomycete species. Bark samples were collected from nine tree species at different elevations (1600-3400 m a.s.l.) on Qin Mountain, Shaanxi Province, China. Actinomycetes were cultivated, enumerated and isolated using serial dilution and spread-plate techniques. The antimicrobial activity of the actinomycete isolates was analyzed using an agar block method against 15 typical bacterial and fungal species and plant pathogens. The dominant isolates were identified by 16S rRNA-based sequence analysis. Results showed that actinomycete counts in bark samples of Quercus liaotungensis Koidz. were the highest among all tree species tested. The numbers of actinomycete species in bark samples were highest in Q. aliena var.
acutiserrata and Spiraea alpina Pall. Antagonistic activity was detected in approximately 54% of the actinomycete isolates. Of these, 20 isolates (25%) showed broad-spectrum antagonistic activity against ≥5 of the microorganisms tested. In conclusion, the bark on coniferous and broadleaf trees possesses a high diversity of actinomycetes and serves as a natural source of bioactive compound-producing actinomycetes. 7. Modelling biomechanics of bark patterning in grasstrees. Science.gov (United States) Dale, Holly; Runions, Adam; Hobill, David; Prusinkiewicz, Przemyslaw 2014-09-01 Bark patterns are a visually important characteristic of trees, typically attributed to fractures occurring during secondary growth of the trunk and branches. An understanding of bark pattern formation has been hampered by insufficient information regarding the biomechanical properties of bark and the corresponding difficulties in faithfully modelling bark fractures using continuum mechanics. This study focuses on the genus Xanthorrhoea (grasstrees), which have an unusual bark-like structure composed of distinct leaf bases connected by sticky resin. Due to its discrete character, this structure is well suited for computational studies. A dynamic computational model of grasstree development was created. The model captures both the phyllotactic pattern of leaf bases during primary growth and the changes in the trunk's width during secondary growth. A biomechanical representation based on a system of masses connected by springs is used for the surface of the trunk, permitting the emergence of fractures during secondary growth to be simulated. The resulting fracture patterns were analysed statistically and compared with images of real trees. The model reproduces key features of grasstree bark patterns, including their variability, spanning elongated and reticulate forms. The patterns produced by the model have the same statistical character as those seen in real trees.
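A minimal one-dimensional sketch of the mass-spring idea described in the grasstree record above: springs around the trunk circumference are strained uniformly by secondary growth, and a spring fractures when the strain exceeds its locally varied strength. All parameter values here are illustrative assumptions, not figures from the paper.

```python
import random

# 1-D toy version of the mass-spring surface model: springs around the
# trunk circumference are stretched uniformly by secondary growth; a
# spring breaks when the growth strain exceeds its local strength.
# All parameters below are illustrative, not taken from the paper.
random.seed(0)

n_springs = 40
base_strength = 0.15   # mean fracture strain (illustrative)
growth_strain = 0.15   # uniform strain imposed by circumference growth

intact = []
for _ in range(n_springs):
    strength = base_strength * random.uniform(0.8, 1.2)  # local variability
    intact.append(growth_strain <= strength)

n_fractures = intact.count(False)
print(f"{n_fractures} of {n_springs} springs fractured")
```

Because the strain sits at the mean strength, roughly half the springs fracture, and the surviving/broken pattern varies from run to run when the seed is changed, a crude analogue of the pattern variability the model reproduces.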
The model was able to support the general hypothesis that the patterns observed in the grasstree bark-like layer may be explained in terms of mechanical fractures driven by secondary growth. Although the generality of the results is limited by the unusual structure of grasstree bark, it supports the hypothesis that bark pattern formation is primarily a biomechanical phenomenon. 8. Pheromone biosynthesis in bark beetles. Science.gov (United States) Tittiger, Claus; Blomquist, Gary J 2017-12-01 Pine bark beetles rely on aggregation pheromones to coordinate mass attacks and thus reproduce in host trees. The structural similarity between many pheromone components and those of defensive tree resin led to early suggestions that pheromone components are metabolic derivatives of ingested precursors. This model has given way to our current understanding that most pheromone components are synthesized de novo. Their synthesis involves enzymes that modify products from endogenous metabolic pathways; some of these enzymes have been identified and characterized. Pheromone production is regulated in a complex way involving multiple signals, including JH III. This brief review summarizes progress in our understanding of this highly specialized metabolic process. Copyright © 2017 Elsevier Inc. All rights reserved. 9. Hydrological properties of bark of selected forest tree species. Part 2: Interspecific variability of bark water storage capacity Directory of Open Access Journals (Sweden) Ilek Anna 2017-06-01 Full Text Available The subject of the present research is the water storage capacity of bark of seven forest tree species: Pinus sylvestris L., Larix decidua Mill., Abies alba Mill., Picea abies (L.) H. Karst., Quercus robur L., Betula pendula Ehrh. and Fagus sylvatica L. The aim of the research is to demonstrate differences in the formation of bark water storage capacity between species and to identify factors influencing the hydrological properties of bark.
The maximum water storage capacity of bark was determined under laboratory conditions by performing a series of experiments simulating rainfall and by immersing bark samples in containers filled with water. After each single experiment, the bark samples were subjected to gravity filtration in a desiccator partially filled with water. The experiments lasted from 1084 to 1389 hours, depending on the bark sample. In all the studied species, bark sampled from the thinnest trees is characterized by the highest water storage capacity expressed in mm H2O · cm-3, while bark sampled from the thickest trees - by the lowest capacity. On the other hand, bark sampled from the thickest trees is characterized by the highest water storage capacity expressed in mm H2O · cm-2, whereas bark from the thinnest trees - by the lowest capacity. In most species tested, as the tree thickness and thus the bark thickness and the coefficient of development of the interception surface of bark increase, the sorption properties of the bark decrease with bark depth, and the main role in water retention is played by the outer bark surface. The bark of European beech is an exception because of the smallest degree of surface development and because the dominant process is the absorption of water. When examining the hydrological properties of bark and calculating its parameters, one needs to take into account the actual surface of the bark of trees. Disregarding the actual bark surface may lead to significant errors in the interpretation of research results. 10.
IDENTIFICATION AND CHARACTERIZATION OF FIVE NON-TRADITIONAL SOURCE CATEGORIES: CATASTROPHIC/ACCIDENTAL RELEASES, VEHICLE REPAIR FACILITIES, RECYCLING, PESTICIDE APPLICATION, AND AGRICULTURAL OPERATIONS Science.gov (United States) The report gives results of work that is part of EPA's program to identify and characterize emissions sources not currently accounted for by either the existing Aerometric Information Retrieval System (AIRS) or State Implementation Plan (SIP) area source methodologies and to deve... 11. Categories from scratch NARCIS (Netherlands) Poss, R. 2014-01-01 The concept of category from mathematics happens to be useful to computer programmers in many ways. Unfortunately, all "good" explanations of categories so far have been designed by mathematicians, or at least theoreticians with a strong background in mathematics, and this makes categories 12. Testing applicability of black poplar (Populus nigra L.) bark to heavy metal air pollution monitoring in urban and industrial regions International Nuclear Information System (INIS) Berlizov, A.N.; Blum, O.B.; Filby, R.H.; Malyuk, I.A.; Tryshyn, V.V. 2007-01-01 A comparative study of the capabilities of black poplar-tree (Populus nigra L.) bark as a biomonitor of atmospheric heavy-metal pollution is reported. Performance indicators (concentrations and enrichment factors) of heavy metal bioaccumulation of bark were compared to the corresponding indicators of epiphytic lichens Xanthoria parietina (L.) Th. Fr. and Physcia adscendens (Fr.) H. Oliver, collected simultaneously with bark samples within the Kiev urban-industrial conurbation. The concentrations of 40 minor and trace elements in the samples were measured by a combination of epithermal and instrumental neutron activation analysis (NAA) using a 10 MW nuclear research reactor WWR-M as the neutron source. Statistical analysis of the data was carried out using non-parametric tests.
It was shown that for the majority of the elements determined a good correlation exists between their concentrations in bark and in the lichen species. The accumulation capability of the bark was found to be as effective as, and in some cases better than, that of both lichen species. Based on the background levels and variations of the elemental concentrations in black poplar-tree bark, threshold values for the enrichment factors were established. For a number of elements (As, Au, Ce, Co, Cr, Cu, La, Mn, Mo, Ni, Sb, Sm, Ti, Th, U, V, W) an interspecies calibration was performed. An optimized pre-irradiation treatment of the bark sample was employed which efficiently separated the most informative external layer from the deeper layers of the bark and thus minimized variations of the element concentrations. Results of this study support black poplar-tree bark as an alternative to epiphytic lichens for heavy metal air pollution monitoring in urban and industrial regions, where severe environmental conditions may result in scarcity or even lack of the indicator species. 13. Retrospective determination of 137Cs specific activity distribution in spruce bark and bark aggregated transfer factor in forests on the scale of the Czech Republic ten years after the Chernobyl accident. Science.gov (United States) Suchara, I; Rulík, P; Hůlka, J; Pilátová, H 2011-04-15 The (137)Cs specific activities (mean 32 Bq kg(-1)) were determined in spruce bark samples that had been collected at 192 sampling plots throughout the Czech Republic in 1995, and were related to the sampling year. The (137)Cs specific activities in spruce bark correlated significantly with the (137)Cs depositions in areas affected by different precipitation sums operating at the time of the Chernobyl fallout in 1986. The ratio of the (137)Cs specific activities in bark to the (137)Cs deposition levels yielded a bark aggregated transfer factor T(ag) of about 10.5×10(-3) m(2) kg(-1).
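As a quick consistency check on the transfer factor quoted above: an aggregated transfer factor is the ratio of the specific activity in bark (Bq/kg) to the deposition on the soil surface (Bq/m2), with units taken here as m2/kg, the usual convention for aggregated transfer factors. The mean deposition implied by the two reported figures can then be back-calculated.

```python
# Aggregated transfer factor: T_ag = bark specific activity / deposition.
# Both input values are the means reported in the abstract above; the
# deposition is back-calculated, not a value stated in the record.
bark_activity = 32.0   # Bq/kg, mean 137Cs specific activity in spruce bark
t_ag = 10.5e-3         # m^2/kg, bark aggregated transfer factor

implied_deposition = bark_activity / t_ag   # Bq/m^2
print(f"implied mean 137Cs deposition ≈ {implied_deposition:.0f} Bq/m^2")
# prints: implied mean 137Cs deposition ≈ 3048 Bq/m^2
```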
Taking into account the residual specific activities of (137)Cs in bark of 20 Bq kg(-1) and the available pre-Chernobyl data on the (137)Cs deposition loads on the soil surface in the Czech Republic, the real aggregated transfer factors after and before the Chernobyl fallout proved to be T*(ag)=3.3×10(-3) m(2) kg(-1) and T**(ag)=4.0×10(-3) m(2) kg(-1), respectively. The aggregated transfer factors T*(ag) for (137)Cs and spruce bark did not differ significantly in areas unequally affected by the (137)Cs fallout in the Czech Republic in 1986, and the figures for these aggregated transfer factors were very similar to the mean bark T(ag) values published from the extensively affected areas near Chernobyl. The magnitude of the (137)Cs aggregated transfer factors for spruce bark for the pre-Chernobyl and post-Chernobyl periods in the Czech Republic was also very similar. The variability in spruce bark acidity caused by the operation of local anthropogenic air pollution sources did not significantly influence the accumulation and retention of (137)Cs in spruce bark. Increasing elevation of the bark sampling plots had a significant effect on raising the remaining (137)Cs specific activities in bark in areas affected by precipitation at the time when the plumes crossed, because the sums of this precipitation increased with elevation (covariable). Copyright © 2011 Elsevier B.V. All rights reserved. 14. Tests of CP violation with $\bar{K}^0$ and $K^0$ at LEAR CERN Multimedia 2002-01-01 PS195: Tests of CP violation with $\bar{K}^0$ and $K^0$ at LEAR. The aim of the experiment is to carry out precision tests of CP, T and CPT on the neutral kaon system through $K^0$-$\bar{K}^0$ interferometry using LEAR as an intense source. A beam of $\sim 10^{6}$ $\bar{p}$ events/second is brought to rest in a hydrogen target, producing $K^0$ and $\bar{K}^0$ events through the reaction channels $\bar{p}p \rightarrow K^0 + (K^-\pi^+)$ and $\bar{p}p \rightarrow \bar{K}^0 + (K^+\pi^-)$. The neutral strange particles and their antiparticles are tagged by detecting in the magnetic field the sign of the accompanying charged kaons, identified via Cerenkov counters and scintillators. The experiment has the unique feature that the decays from particles and antiparticles are recorded under the same operating conditions using tracking chambers and a gas sampling electromagnetic calorimeter. The measured time-dependent $K^0$-$\bar{K}^0$ asymmetries for non-lepton... 15. A study on temporal variation of elemental composition in tree barks used as air pollution indicators Energy Technology Data Exchange (ETDEWEB) Santos, Eliane C.; Saiki, Mitiko [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)] 2015-07-01 The study of air pollution using biological matrices has shown that tree barks may be used as biomonitors due to the accumulation of aerosol particles on their porous surfaces. The bark elemental composition can provide information on pollution sources as well as characterize the aerial pollutants from a wide geographical region. The aim of this study was to investigate the variation in elemental composition in barks with time of exposure. Tree barks from Tipuana (Tipuana tipu) and Sibipiruna (Caesalpinia peltophoroides) species were collected in February 2013 and July 2014 in the city of São Paulo. For analysis, the barks were cleaned, grated, ground and analyzed by neutron activation analysis (NAA).
Aliquots of samples and synthetic standards of elements were irradiated with a thermal neutron flux at the IEA-R1 nuclear research reactor and, after a suitable decay time, the induced gamma activities were analyzed by gamma spectrometry. The elements As, Br, Ca, Co, Cr, Cs, Fe, K, La, Rb, Sb, Sc and Zn were determined, and the results indicated variability in the concentrations depending on the element, sampling period and also on tree species, indicating that there are no well-defined temporal trends. The quality control of the analytical results, evaluated by analyzing the INCT Virginia Tobacco Leaves certified reference material (CRM), presented values of |z-score| < 2, indicating that the NAA procedure applied is suitable for the analyses. (author) 16. Analysis of tree bark samples for air pollution biomonitoring of an urban area International Nuclear Information System (INIS) Martins, Ana Paula G.; Negri, Elnara M.; Saldiva, Paulo H.N. 2009-01-01 Air pollution is receiving much attention as a public health problem around the world due to its adverse health effects on urban populations. Within this context, the use of vegetal biomonitoring to evaluate air quality has been investigated throughout the world. Air pollutant levels are high in the city of Sao Paulo, SP, Brazil, with vehicle emissions being their main source. The aim of this study was to evaluate concentrations of As, Ba, Br, Ca, Co, Cr, Cu, Fe, Mn, Pb, S, Sb and Zn in tree bark samples used as biomonitors of urban air pollution. Concentrations of these elements were determined in barks collected from trees of Ibirapuera Park, one of the biggest and most visited parks of the city of Sao Paulo. Samples of tree barks were also collected at a site outside the city of Sao Paulo, in a rural area of Embu-Guacu, considered as a control site.
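The |z-score| < 2 acceptance criterion used for the NAA quality control above compares each measured value for a certified reference material with its certified value, scaled by the combined uncertainty. A minimal sketch, with hypothetical numbers that are not results from the study:

```python
# |z| < 2 acceptance check as used in NAA quality control:
# z = (measured - certified) / combined uncertainty.
# The numbers below are hypothetical, for illustration only.
def z_score(measured, certified, uncertainty):
    return (measured - certified) / uncertainty

# hypothetical Zn result for a certified reference material (mg/kg)
z = z_score(measured=49.1, certified=50.0, uncertainty=1.2)
print(f"z = {z:.2f}, acceptable: {abs(z) < 2}")
# prints: z = -0.75, acceptable: True
```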
The element concentrations were determined by the methods of Instrumental Neutron Activation Analysis (INAA) and of Energy Dispersive X-ray Fluorescence Spectrometry (EDXRF). The findings of this study showed that tree bark samples may be used as biomonitors of urban air pollution on a micro scale, and both techniques, INAA and EDXRF, can be used to evaluate element concentrations in tree bark samples. (author) 17. Trial production of fuel pellet from Acacia mangium bark waste biomass Science.gov (United States) Amirta, R.; Anwar, T.; Sudrajat; Yuliansyah; Suwinarti, W. 2018-04-01 Fuel pellets are one of the innovative products that can be produced from various sources of biomass such as agricultural residues and residues from forestry and the wood industries, including wood bark. In this paper, the potential for fuel pellet production using Acacia mangium bark, which is wasted in abundance by the chip mill industry, was studied. Fuel pellets were produced using a modified animal feed pellet press machine equipped with rotating roller-cylinders. International standards for fuel pellet quality such as ONORM (Austria), SS (Sweden), DIN (Germany), EN (European) and ITEBE (Italy) were used to evaluate the optimum composition of feedstock and additives. The results showed that the quality of the fuel pellets produced was good compared to commercial sawdust pellets. Mixing Acacia bark dust with 10% tapioca and 20% glycerol (w/w) increased pellet stability and raised the heating value to 4,383 kcal/kg. Blending Acacia bark with tapioca and glycerol improved its physical, chemical and combustion properties to meet the international standard requirements for the export market. Based on these findings, production of fuel pellets from Acacia bark waste biomass is a promising alternative substitute for fossil energy in the future. 18.
A study on temporal variation of elemental composition in tree barks used as air pollution indicators International Nuclear Information System (INIS) Santos, Eliane C.; Saiki, Mitiko 2015-01-01 (Abstract identical to record 15 above.) 19. Quantum non-barking dogs International Nuclear Information System (INIS) Imari Walker, Sara; Davies, Paul C W; Samantray, Prasant; Aharonov, Yakir 2014-01-01 Quantum weak measurements with states both pre- and post-selected offer a window into a hitherto neglected sector of quantum mechanics.
A class of such systems involves time-dependent evolution with transitions possible. In this paper we explore two very simple systems in this class. The first is a toy model representing the decay of an excited atom. The second is the tunneling of a particle through a barrier. The post-selection criteria are chosen as follows: at the final time, the atom remains in its initial excited state for the first example and the particle remains behind the barrier for the second. We then ask what weak values are predicted in the physical environment of the atom (to which no net energy has been transferred) and in the region beyond the barrier (to which the particle has not tunneled). Thus, just as the dog that didn't bark in Arthur Conan Doyle's story Silver Blaze gave Sherlock Holmes meaningful information about the dog's non-canine environment, here we probe whether the particle that has not decayed or has not tunneled can provide measurable information about physical changes in the environment. Previous work suggests that very large weak values might arise in these regions for long durations between pre- and post-selection times. Our calculations reveal some distinct differences between the two model systems. (paper) 20. Activity of Oligoresveratrols from Stem Bark of Hopea mengarawan (Dipterocarpaceae) as Hydroxyl Radical Scavenger Directory of Open Access Journals (Sweden) SRI ATUN 2006-06-01 Full Text Available Four oligoresveratrols ranging from dimer to tetramer, isolated from the stem bark of Hopea mengarawan (Dipterocarpaceae), were tested for their activity as hydroxyl radical scavengers. The activity of these compounds was evaluated against the 2-deoxyribose degradation induced by hydroxyl radicals generated via a Fenton-type reaction. Results showed that balanocarpol, heimiol A, vaticanol G, and vaticanol B had IC50 values of 3.83, 15.44, 2.01, and 4.71 µM, respectively. These results suggest that oligoresveratrols from the stem bark of H.
mengarawan may be useful as potential sources of natural antioxidants. 1. In vitro evaluation of antioxidant activity of Cordia dichotoma (Forst f.) bark. Science.gov (United States) Nariya, Pankaj B; Bhalodia, Nayan R; Shukla, Vinay J; Acharya, Rabinarayan; Nariya, Mukesh B 2013-01-01 Cordia dichotoma Forst. f. bark is identified as the botanical source of Shleshmataka in the Ayurvedic pharmacopoeia. The present investigation was undertaken to evaluate the possible antioxidant potential of methanolic and butanol extracts of C. dichotoma bark. In vitro antioxidant activity of the methanolic and butanol extracts was determined by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging assay. The extracts were also evaluated for their phenolic contents and antioxidant activity. Phenolic content was measured using the Folin-Ciocalteu reagent and was calculated as gallic acid equivalents. Antiradical activity of the methanolic extract was measured by the DPPH assay and compared to ascorbic acid, and the ferric reducing power of the extract was evaluated by the Oyaizu method. In the present study three in vitro models were used to evaluate antioxidant activity. The first two methods were for direct measurement of radical scavenging activity, and the remaining method evaluated the reducing power. The present study revealed that C. dichotoma bark has significant radical scavenging activity. 2. Blocking in Category Learning OpenAIRE Bott, Lewis; Hoffman, Aaron B.; Murphy, Gregory L. 2007-01-01 Many theories of category learning assume that learning is driven by a need to minimize classification error. When there is no classification error, therefore, learning of individual features should be negligible. We tested this hypothesis by conducting three category learning experiments adapted from an associative learning blocking paradigm. Contrary to an error-driven account of learning, participants learned a wide range of information when they learned about categories, and blocking effe...
Category I structures program International Nuclear Information System (INIS) Endebrock, E.G.; Dove, R.C. 1981-01-01 The objective of the Category I Structures Program is to supply experimental and analytical information needed to assess the structural capacity of Category I structures (excluding the reactor containment building). Because the shear wall is a principal element of a Category I structure, and because relatively little experimental information is available on shear walls, it was selected as the test element for the experimental program. The large load capacities of shear walls in Category I structures dictate that the experimental tests be conducted on small-size shear wall structures that incorporate the general construction details and characteristics of as-built shear walls. 4. Categories and logical syntax NARCIS (Netherlands) Klev, Ansten Morch 2014-01-01 The notions of category and type are here studied through the lens of logical syntax: Aristotle's as well as Kant's categories through the traditional form of proposition 'S is P', and modern doctrines of type through the Fregean form of proposition 'F(a)', function applied to argument. Topics 5. Computing color categories NARCIS (Netherlands) Yendrikhovskij, S.N.; Rogowitz, B.E.; Pappas, T.N. 2000-01-01 This paper is an attempt to develop a coherent framework for understanding, modeling, and computing color categories. The main assumption is that the structure of color category systems originates from the statistical structure of the perceived color environment. This environment can be modeled as
bark, fruit exocarp and mesocarp, and seeds by establishing the levels of macro- and microelements, total phenolics, flavonoids and tannins. Our results revealed that all of the tested service tree samples were rich in potassium. Bark was the best source of calcium and zinc, while seeds were the best source of magnesium. Compared to the bark and seeds, fruit exocarp and mesocarp contained significantly lower amounts of these three elements. Immature exocarp and bark contained the highest amounts of total phenolics and showed the highest antioxidant activity. Maturation significantly decreased the amount of total phenolics in fruits, as well as the antioxidant activity of total phenolics and total tannins from exocarp, but not from mesocarp. Exocarp was the richest in total flavonoids. Based on the obtained data, we have concluded that the under-utilised species S. domestica L. could serve as an important source of mineral elements and antioxidants in the human diet. 7. Gastric antiulcer and antiinflammatory activities of Calotropis procera stem bark Directory of Open Access Journals (Sweden) Nagesh S. Tour 2011-09-01 Full Text Available In recent years, a widespread search has been launched to identify new antiinflammatory and antiulcer drugs from natural sources. The study was aimed at evaluating the antiinflammatory and antiulcer activity of the chloroform extract (CH) and hydroalcoholic extract (HE) of the stem bark of Calotropis procera (Aiton) W.T. Aiton (Apocynaceae), obtained successively by cold maceration. The antiinflammatory effect of the CH and HE extracts of the stem bark of C. procera against carrageenan-induced paw oedema, and also their antiulcer activity in two acute models, aspirin (100 mg/kg, p.o.) and ethanol (96%, 1 mL/200 g), in albino rats have been studied and found to be significant at 200 and 400 mg/kg when compared to the standard drugs.
As a part of investigations to obtain compounds with antiinflammatory and antiulcer activity in this work, a bioassay was carried out with fractions obtained from the chloroform extract with n-hexane (NF1), 1-butanol (BF1), ethyl acetate (EF1) and chloroform (CF1). The hydroalcoholic extract (HE) of the stem bark was fractionated with n-hexane (NF2), 1-butanol (BF2), ethyl acetate (EF2), chloroform (CF2) and water (WF2). The fractions were freeze-dried and evaluated for their antiinflammatory and antiulcer activity. Fractions NF1, CF1, BF2 and EF2 (20 mg/kg) showed significant antiinflammatory and antiulcer activity. The results obtained for antiulcer activity were also well supported by the histopathological examination of the open excised rat stomach. Further experiments are underway to determine which phytoconstituents are involved in the antiinflammatory and antiulcer activities, as well as the mechanisms involved in gastroprotection. 8. Triangulated categories (AM-148) CERN Document Server Neeman, Amnon 2014-01-01 The first two chapters of this book offer a modern, self-contained exposition of the elementary theory of triangulated categories and their quotients. The simple, elegant presentation of these known results makes these chapters eminently suitable as a text for graduate students. The remainder of the book is devoted to new research, providing, among other material, some remarkable improvements on Brown's classical representability theorem. In addition, the author introduces a class of triangulated categories, the "well generated triangulated categories", and studies their properties. This 9. Photosynthetic bark: use of chlorophyll absorption continuum index to estimate Boswellia papyrifera bark chlorophyll content NARCIS (Netherlands) Girma, A.; Skidmore, A.K.; Bie, de C.A.J.M.; Bongers, F.; Schlerf, M. 2013-01-01 Quantification of chlorophyll content provides useful insight into the physiological performance of plants.
Several leaf chlorophyll estimation techniques, using hyperspectral instruments, are available. However, to our knowledge, a non-destructive bark chlorophyll estimation technique is not available. 10. Photosynthetic bark: use of chlorophyll absorption continuum index to estimate Boswellia papyrifera bark chlorophyll content NARCIS (Netherlands) Girma Gebrekidan, A.; Skidmore, A.K.; de Bie, C.A.J.M.; Bongers, Frans; Schlerf, Martin; Schlerf, M. 2013-01-01 (Abstract identical to record 9 above.) 11. Analysis of rare categories CERN Document Server He, Jingrui 2012-01-01 This book focuses on rare category analysis where the majority classes have smooth distributions and the minority classes exhibit the compactness property. It focuses on challenging cases where the support regions of the majority and minority classes overlap. 12. Consumer Product Category Database Data.gov (United States) U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use... 13. Alternative solutions for the bio-denitrification of landfill leachates using pine bark and compost. Science.gov (United States) Trois, Cristina; Pisano, Giulia; Oxarango, Laurent 2010-06-15 Nitrified leachate may still require an additional bio-denitrification step, which occurs with the addition of often-expensive chemicals as a carbon source. This study explores the applicability of low-cost carbon sources such as garden refuse compost and pine bark for the denitrification of high strength landfill leachates.
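The need for an external carbon source in the denitrification study above can be sketched with the textbook stoichiometry of heterotrophic denitrification, taking a generic carbohydrate (CH2O) as the electron donor: 5 CH2O + 4 NO3- + 4 H+ -> 5 CO2 + 2 N2 + 7 H2O. The calculation below uses generic stoichiometric values, not data from the study; only the 1100 mg N/l figure comes from the abstract.

```python
# Theoretical carbon demand for heterotrophic denitrification, from the
# textbook reaction 5 CH2O + 4 NO3- + 4 H+ -> 5 CO2 + 2 N2 + 7 H2O.
# Generic stoichiometry, not data from the study above.
M_C, M_N = 12.011, 14.007                 # g/mol

g_C_per_g_NO3N = (5 * M_C) / (4 * M_N)    # g of carbon per g of NO3-N
nitrate_n = 1100.0                        # mg N/l, strongest leachate tested
carbon_demand = g_C_per_g_NO3N * nitrate_n  # mg C per litre of leachate

print(f"theoretical demand ≈ {g_C_per_g_NO3N:.2f} g C per g NO3-N")
print(f"≈ {carbon_demand:.0f} mg C per litre at 1100 mg N/l")
```

This minimum (about 1.07 g C per g NO3-N) is why substrates such as compost and pine bark must release substantial soluble carbon to denitrify high-strength leachate; real dosing is higher because some carbon goes to biomass growth.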
The overall objective is to assess efficiency, kinetics and performance of the substrates in the removal of high nitrate concentrations. Garden refuse and pine bark are currently disposed of in general waste landfills in South Africa, separated from the main waste stream. A secondary objective is to assess the feasibility of re-using green waste as by-product of an integrated waste management system. Denitrification processes in fixed bed reactors were simulated at laboratory scale using anaerobic batch tests and leaching columns packed with immature compost and pine bark. Biologically treated leachate from a Sequencing Batch Reactor (SBR) with nitrate concentrations of 350, 700 and 1100 mgN/l were used for the trials. Preliminary results suggest that, passed the acclimatization step (40 days for both substrates), full denitrification is achieved in 10-20 days for the pine bark and 30-40 days for the compost. Copyright 2010 Elsevier B.V. All rights reserved. 14. Alternative solutions for the bio-denitrification of landfill leachates using pine bark and compost International Nuclear Information System (INIS) Trois, Cristina; Pisano, Giulia; Oxarango, Laurent 2010-01-01 Nitrified leachate may still require an additional bio-denitrification step, which occurs with the addition of often-expensive chemicals as carbon source. This study explores the applicability of low-cost carbon sources such as garden refuse compost and pine bark for the denitrification of high strength landfill leachates. The overall objective is to assess efficiency, kinetics and performance of the substrates in the removal of high nitrate concentrations. Garden refuse and pine bark are currently disposed of in general waste landfills in South Africa, separated from the main waste stream. A secondary objective is to assess the feasibility of re-using green waste as by-product of an integrated waste management system. 
Denitrification processes in fixed bed reactors were simulated at laboratory scale using anaerobic batch tests and leaching columns packed with immature compost and pine bark. Biologically treated leachate from a Sequencing Batch Reactor (SBR) with nitrate concentrations of 350, 700 and 1100 mgN/l was used for the trials. Preliminary results suggest that, past the acclimatization step (40 days for both substrates), full denitrification is achieved in 10-20 days for the pine bark and 30-40 days for the compost. 15. Heavy metals in bark of Pinus massoniana (Lamb.) as an indicator of atmospheric deposition near a smeltery at Qujiang, China. Science.gov (United States) Kuang, Yuan Wen; Zhou, Guo Yi; Da Wen, Zhi; Liu, Shi Zhong 2007-06-01 below their background values, except for Cd and Co. Levels of the metals, in particular Pb and Zn, in the soils beneath the sample trees at Qujiang were higher than those at Dinghushan with statistical significance. The result suggested that the pine forest soils at Qujiang had a great input of heavy metals from wet and dry atmospheric deposition, with the Pb-Zn smeltery most probably being the source. Levels of Cu, Fe, Mn, Zn, Ni and Pb at Qujiang, both in the inner and the outer bark, were statistically higher than those at Dinghushan. Higher concentrations of Pb, Fe, Zn and Cu may come from the stem-flow of elements leached from the canopy, soil splash at the 1.5 m sampling height, and sorption of metals in the mosses and lichens growing on the bark, which were direct or indirect results of the atmospheric deposition. Levels of heavy metals in the outer bark were well associated with the metal concentrations in the soil, reflecting the close relationship between atmospheric metal deposition and accumulation in the outer bark of Masson pine. The metal load was significantly greater at Qujiang than at Dinghushan, based on the levels in the bark of Pinus massoniana and on the concentrations in the soils beneath the trees.
Bark of Pinus massoniana, especially the outer bark, was an indicator of metal loading at least at the time of sampling. The results from this study and the techniques employed constituted a new contribution to the development of biogeochemical methods for environmental monitoring, particularly in areas with a high frequency of pollution in China. The method would be of value for follow-up studies aimed at the assessment of industrial pollution in other areas similar to the Pearl River Delta. 16. Product Category Management Issues OpenAIRE Żukowska, Joanna 2011-01-01 The purpose of the paper is to present the issues related to category management. It includes an overview of category management definitions and the correct process of exercising it. Moreover, attention is paid to the advantages of brand management, and the benefits the supplier and retailer may obtain in this way. The risk element related to this topic is also presented herein. Joanna Żukowska 17. SYNERGISTIC ANTIBACTERIAL EFFECT OF STEM BARK ... African Journals Online (AJOL) userpc ABSTRACT. The study was aimed at screening the stem bark extracts of Faidherbia albida and Psidium guajava for synergistic antibacterial effect against methicillin-resistant Staphylococcus aureus (MRSA). The powdered plant materials were extracted with methanol using a cold maceration technique and the extracts were ... 18. Bark beetle responses to vegetation management practices Science.gov (United States) Joel D. McMillin; Christopher J. Fettig 2009-01-01 Native tree-killing bark beetles (Coleoptera: Curculionidae, Scolytinae) are a natural component of forest ecosystems. Eradication is neither possible nor desirable, and periodic outbreaks will occur as long as susceptible forests and favorable climatic conditions co-exist. Recent changes in forest structure and tree composition by natural processes and management... 19.
A dynamical model for bark beetle outbreaks Czech Academy of Sciences Publication Activity Database Křivan, Vlastimil; Lewis, M.; Bentz, B. J.; Bewick, S.; Lenhart, S. M.; Liebhold, A. 2016-01-01 Roč. 407, OCT 21 (2016), s. 25-37 ISSN 0022-5193 Institutional support: RVO:60077344 Keywords: bistability * bark beetle * Dendroctonus ponderosae Subject RIV: EH - Ecology, Behaviour Impact factor: 2.113, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022519316301928 20. Pharmaceutical and nutraceutical effects of Pinus pinaster bark extract Science.gov (United States) Iravani, S.; Zolfaghari, B. 2011-01-01 In everyday life, our body generates free radicals and other reactive oxygen species, which are derived either from endogenous metabolic processes (within the body) or from external sources. Many clinical and pharmacological studies suggest that natural antioxidants can prevent oxidative damage. Among natural antioxidant products, Pycnogenol® (French Pinus pinaster bark extract) has received considerable attention because of its strong free radical-scavenging activity against reactive oxygen and nitrogen species. P. pinaster bark extract (PBE) contains polyphenolic compounds (catechin, taxifolin, procyanidins of various chain lengths formed by catechin and epicatechin units, and phenolic acids) capable of producing diverse potentially protective effects against chronic and degenerative diseases. This herbal medication has been reported to have cardiovascular benefits, such as vasorelaxant activity, angiotensin-converting enzyme inhibiting activity, and the ability to enhance the microcirculation by increasing capillary permeability. Moreover, effects on the immune system and modulation of nitrogen monoxide metabolism have been reported. This article provides a brief overview of clinical studies describing the beneficial and health-promoting effects of PBE. PMID:22049273 1.
Models as Relational Categories Science.gov (United States) Kokkonen, Tommi 2017-11-01 Model-based learning (MBL) has an established position within science education. It has been found to enhance conceptual understanding and provide a way for engaging students in authentic scientific activity. Despite ample research, few studies have examined the cognitive processes regarding learning scientific concepts within MBL. On the other hand, recent research within cognitive science has examined the learning of so-called relational categories. Relational categories are categories whose membership is determined on the basis of the common relational structure. In this theoretical paper, I argue that viewing models as relational categories provides a well-motivated cognitive basis for MBL. I discuss the different roles of models and modeling within MBL (using ready-made models, constructive modeling, and generative modeling) and discern the related cognitive aspects brought forward by the reinterpretation of models as relational categories. I will argue that relational knowledge is vital in learning novel models and in the transfer of learning. Moreover, relational knowledge underlies the coherent, hierarchical knowledge of experts. Lastly, I will examine how the format of external representations may affect the learning of models and the relevant relations. The nature of the learning mechanisms underlying students' mental representations of models is an interesting open question to be examined. Furthermore, the ways in which the expert-like knowledge develops and how to best support it is in need of more research. The discussion and conceptualization of models as relational categories allows discerning students' mental representations of models in terms of evolving relational structures in greater detail than previously done. 2. 
Antioxidant defences of Norway spruce bark against bark beetles and its associated blue-stain fungus Directory of Open Access Journals (Sweden) Felicijan Mateja 2015-12-01 Full Text Available Bark beetles and their fungal associates are integral parts of forest ecosystems. The European spruce bark beetle (Ips typographus Linnaeus, 1758) and the associated pathogenic blue-stain fungus Ceratocystis polonica (Siem.) C. Moreau are the most devastating pests of Norway spruce [Picea abies (L.) H. Karst.]. Bark beetles commonly inhabit weakened and felled trees as well as vital trees. They cause physiological disorders in trees by destroying the phloem and cambium or interrupting the transpiration flow in the xylem. Conifers have a wide range of effective defence mechanisms that are based on the inner bark anatomy and physiological state of the tree. The basic function of bark defences is to protect the nutrient- and energy-rich phloem, the vital meristematic region of the vascular cambium, and the transpiration flow in the sapwood. The main area of defence mechanisms is the secondary phloem, which is physically and chemically protected by polyphenolic parenchyma (PP) cells, sclerenchyma, calcium oxalate crystals and resin ducts. Conifer trunk pest resistance includes constitutive defences, inducible defences and acquired resistance. Both constitutive and inducible defences may deter beetle invasion, impede fungal growth and close entrance wounds. During a successful attack, systemic acquired resistance (SAR) becomes effective and represents a third defence strategy. It gradually develops throughout the plant and provides a systemic change within the whole tree's metabolism, which is maintained over a longer period of time. The broad range of defence mechanisms that contribute to the activation and utilisation of SAR includes antioxidants and antioxidant enzymes, which are generally linked to the actions of reactive oxygen species (ROS).
The presented review discusses the current knowledge on the antioxidant defence strategies of spruce inner bark against the bark beetle (Ips typographus) and the associated blue-stain fungus (Ceratocystis polonica). 3. Categories of transactions International Nuclear Information System (INIS) Anon. 1991-01-01 This chapter discusses the types of wholesale sales made by utilities. The Federal Energy Regulatory Commission (FERC), which regulates inter-utility sales, divides these sales into two broad categories: requirements and coordination. A variety of wholesale sales do not fall neatly into either category. For example, power purchased to replace the Three Mile Island outage is in a sense a reliability purchase, since it is bought on a long-term firm basis to meet basic load requirements. However, it does not fit the traditional model of a sale considered as part of each utility's long-range planning. In addition, this chapter discusses transmission services, with a particular emphasis on wheeling 4. Aleppo pine bark as a biomonitor of atmospheric pollution in the arid environment of Jordan Energy Technology Data Exchange (ETDEWEB) Al-Alawi, Mu'taz M.; Jiries, Anwar [Prince Faisal Center for Dead Sea, Environmental and Energy Research, Mu'tah University, Al-Karak (Jordan); Carreras, Hebe [University of Cordoba, FCEFyN, Cordoba (Argentina); Alawi, Mahmoud [Chemistry Department, University of Jordan, Amman (Jordan); Charlesworth, Susanne M. [Geography, Environment and Disaster Management, Coventry University, Coventry (United Kingdom); Batarseh, Mufeed I. 2007-11-15 Monitoring of atmospheric pollution using Aleppo pine bark as a bioindicator was carried out in the industrial area surrounding the Al-Hussein thermal power station and the oil refinery at Al-Hashimyeh town, Jordan. The concentrations of heavy metals (copper, lead, cadmium, manganese, cobalt, nickel, zinc, iron, and chromium) were analyzed in bark samples collected from the study area during July 2004.
The results showed that high levels of heavy metals were found in tree bark samples retrieved from all studied sites compared with the remote reference site. This is, essentially, due to the fact that the oil refinery and the thermal power plant still use low-quality fuel oil from the by-products of oil refining. Automobile emissions are another source of pollution since the study area is located along a major heavy-traffic highway. It was found that the area around the study sites (Al-Hashimyeh town, Zarqa) is polluted with high levels of heavy metals. Pine bark was found to be a suitable bioindicator of aerial fallout of heavy metals in arid regions. (Abstract Copyright [2007], Wiley Periodicals, Inc.) 5. Bioactivity-guided isolation of antioxidant triterpenoids from Betula platyphylla var. japonica bark. Science.gov (United States) Eom, Hee Jeong; Kang, Hee Rae; Kim, Ho Kyong; Jung, Eun Bee; Park, Hyun Bong; Kang, Ki Sung; Kim, Ki Hyun 2016-06-01 The bark of Betula platyphylla var. japonica (Betulaceae) has been used to treat pneumonia, choloplania, nephritis, and chronic bronchitis. This study aimed to investigate the bioactive chemical constituents of the bark of B. platyphylla var. japonica. A bioassay-guided fractionation and chemical investigation of the bark of B. platyphylla var. japonica resulted in the isolation and identification of a new lupane-type triterpene, 27-hydroxybetunolic acid (1), along with 18 known triterpenoids (2-19). The structure of the new compound (1) was elucidated on the basis of 1D and 2D NMR spectroscopic data analysis as well as HR-ESIMS. Among the known compounds, chilianthin B (17), chilianthin C (18), and chilianthin A (19) were triterpene-lignan esters, which are rarely found in nature. Compounds 4, 6, 7, 17, 18, and 19 showed significant antioxidant activities with IC50 values in the range 4.48-43.02μM in a DPPH radical-scavenging assay. However, no compound showed significant inhibition of acetylcholine esterase (AChE). 
Unfortunately, the new compound (1) showed no significant effect in either biological assay. This study strongly suggests that B. platyphylla var. japonica bark is a potential source of natural antioxidants for use in pharmaceuticals and functional foods. Copyright © 2016 Elsevier Inc. All rights reserved. 6. Alkaloids of root barks of Zanthoxylum spp International Nuclear Information System (INIS) Hohlemwerger, Sandra Virginia Alves; Sales, Edijane Matos; Costa, Rafael dos Santos; Velozo, Eudes da Silva; Guedes, Maria Lenise da Silva 2012-01-01 In 1959, Gottlieb and Antonaccio published a study reporting the occurrence of the lignan sesamin and the triterpene lupeol in Zanthoxylum tingoassuiba. In this work we describe the phytochemical study of the root bark of Z. tingoassuiba, which allowed the identification of lupeol, sesamin, and the alkaloids dihydrochelerythrine, chelerythrine, anorttianamide, cis-N-methyl-canadine, predicentine and 2,3-methylenedioxy-10,11-dimethoxy-tetrahydroprotoberberine. The investigation of hexane and methanol extracts of the root bark of Z. rhoifolium and Z. stelligerum showed the presence of the alkaloids dihydrochelerythrine, anorttianamide, cis-N-methyl-canadine, 7,9-dimethoxy-2,3-methylenedioxybenzophenanthridine and angoline. The occurrence of 2,3-methylenedioxy-10,11-dimethoxy-tetrahydroprotoberberine is first described in Z. tingoassuiba and Z. stelligerum. This is also the first report of the presence of hesperidin and neohesperidin in roots of Z. stelligerum (author) 7. The water holding capacity of bark in Danish angiosperm trees DEFF Research Database (Denmark) Larsen, Hanne Marie Ellegård; Rasmussen, Hanne Nina; Nord-Larsen, Thomas The water holding capacity of bark in seven Danish angiosperm trees was examined. The aim of the study was (1) to examine height trends and (2) bark thickness trends in relation to the water holding capacity and (3) to determine interspecific differences.
The wet-weight and dry-weight of a total number of 427 bark samples were measured. The water holding capacity was calculated as the difference between wet-weight and dry-weight per wet-weight. The water holding capacity increased with elevation in most tree species and, contrary to expectation, thinner bark generally had a higher water holding capacity. Differences in the water holding capacity of bark may influence the occurrence and distribution of a wide range of bark-living organisms, including the distribution of corticolous lichens. 8. TANNIN CONTENT DETERMINATION IN THE BARK OF Eucalyptus spp Directory of Open Access Journals (Sweden) Paulo Fernando Trugilho 2003-07-01 Full Text Available The objective of this study was to determine the tannin contents in the bark of twenty-five species of Eucalyptus through two extraction methods, one using hot water and the other a sequence of toluene and ethanol. The results showed that the extraction methods presented significant differences in the tannin contents. The method using the sequence of toluene and ethanol, for most of the species, promoted a larger extraction of tannin. The hot water method presented higher contents of tannin for Eucalyptus cloeziana (40.31%), Eucalyptus melanophoia (20.49%) and Eucalyptus paniculata (16.03%). In the toluene and ethanol method the species with higher tannin contents were Eucalyptus cloeziana (31.00%), Eucalyptus tereticornis (22.83%) and Eucalyptus paniculata (17.64%). Eucalyptus cloeziana presented great potential as a commercial source of tannin, independent of the extraction method considered. 9. Investigations on bark extracts of Picea abies Energy Technology Data Exchange (ETDEWEB) Weissmann, G 1981-01-01 Successive extraction of the bark with solvents of increasing polarity yielded about 60% of soluble material. The alcohol and water extracts contained principally simple polyphenols and their glycosides, tannins, mono- and disaccharides, soluble hemicelluloses and pectins.
Hot water extracts are suitable for the production of adhesives by reaction with formaldehyde, but their polyphenol content is only 50%. The polyphenols and their glycosides, and glucosides of hydroxystilbenes, were investigated in detail. 10. Phytochemical analysis of Pinus eldarica bark Science.gov (United States) Iravani, S.; Zolfaghari, B. 2014-01-01 Bark extract of Pinus pinaster contains numerous phenolic compounds such as catechins, taxifolin, and phenolic acids. These compounds have received considerable attention because of their anti-inflammatory, antimutagenic, anticarcinogenic, antimetastatic and high antioxidant activities. Although P. pinaster bark has been intensely investigated in the past, there is comparably less information available in the literature regarding P. eldarica bark. Therefore, the aim of this study was to determine the chemical composition of P. eldarica commonly found in Iran. A reversed-phase high pressure liquid chromatography (RP-HPLC) method for the determination of catechin, caffeic acid, ferulic acid, and taxifolin in P. pinaster and P. eldarica was developed. A mixture of 0.1% formic acid in deionized water and 0.1% formic acid in acetonitrile was used as the mobile phase, and chromatographic separation was achieved on a Nova pack C18 at 280 nm. The two studied Pinus species contained high amounts of polyphenolic compounds. Among four marker compounds, the main substances identified in P. pinaster and P. eldarica were taxifolin and catechin, respectively. Furthermore, the composition of the bark oil of P. eldarica obtained by hydrodistillation was analyzed by gas chromatography/mass spectroscopy (GC/MS). Thirty-three compounds accounting for 95.1% of the oil were identified. The oils consisted mainly of mono- and sesquiterpenoid fractions, especially α-pinene (24.6%), caryophyllene oxide (14.0%), δ-3-carene (10.7%), (E)-β-caryophyllene (7.9%), and myrtenal (3.1%). PMID:25657795 11.
The “febrifuge principle” of cinchona barks OpenAIRE Carreira, Teresa; Lopes, Sandra; Maia, Elisa 2007-01-01 The antipyretic properties of cinchona barks had been known since ancient times in South America, particularly in Peru. The use of these barks in medicines against “fevers” in Europe in the 17th century made the exploitation of the cinchonas of Peru a highly productive process, and those cinchona trees became threatened. The Portuguese government, aware of the problem, sought an alternative in cinchona varieties existing in Brazil. By the beginning of the 19th century, samples of different Brazilian barks w... 12. Bark-peeling, food stress and tree spirits - the use of pine inner bark for food in Scandinavia and North America Science.gov (United States) Lars Ostlund; Lisa Ahlberg; Olle Zackrisson; Ingela Bergman; Steve Arno 2009-01-01 The Sami people of northern Scandinavia and many indigenous peoples of North America have used pine (Pinus spp.) inner bark for food, medicine and other purposes. This study compares bark-peeling and subsequent uses of pine inner bark in Scandinavia and western North America, focusing on traditional practices. Pine inner bark contains substances - mainly carbohydrates... 13. Basic category theory CERN Document Server Leinster, Tom 2014-01-01 At the heart of this short introduction to category theory is the idea of a universal property, important throughout mathematics. After an introductory chapter giving the basic definitions, separate chapters explain three ways of expressing universal properties: via adjoint functors, representable functors, and limits. A final chapter ties all three together. The book is suitable for use in courses or for independent study. Assuming relatively little mathematical background, it is ideal for beginning graduate students or advanced undergraduates learning category theory for the first time.
For each new categorical concept, a generous supply of examples is provided, taken from different parts of mathematics. At points where the leap in abstraction is particularly great (such as the Yoneda lemma), the reader will find careful and extensive explanations. Copious exercises are included. 14. CHURCH, Category, and Speciation Directory of Open Access Journals (Sweden) Rinderknecht Jakob Karl 2018-01-01 Full Text Available The Roman Catholic definition of “church”, especially as applied to groups of Protestant Christians, creates a number of well-known difficulties. The similarly complex category, “species,” provides a model for applying this term so as to neither lose the centrality of certain examples nor draw a hard boundary to rule out border cases. In this way, it can help us to more adequately apply the complex ecclesiology of the Second Vatican Council. This article draws parallels between the understanding of speciation and categorization and the definition of Church since the council. In doing so, it applies the work of cognitive linguists, including George Lakoff, Zoltan Kovecses, Giles Fauconnier and Mark Turner, on categorization. We tend to think of categories as containers into which we sort objects according to essential criteria. However, categories are actually built inductively by making associations between objects. This means that natural categories, including species, are more porous than we assume, but nevertheless bear real meaning about the natural world. Taxonomists dispute the border between “zebras” and “wild asses,” but this distinction arises out of genetic and evolutionary reality; it is not merely arbitrary. Genetic descriptions of species have also recently led to the conviction that there are four species of giraffe, not one.
This engagement will ground a vantage point from which the Council's complex ecclesiology can be more easily described so as to authentically integrate its noncompetitive vision vis-à-vis other Christians with its sense of the unique place held by the Catholic Church. 15. Visual memory needs categories OpenAIRE Olsson, Henrik; Poom, Leo 2005-01-01 Capacity limitations in the way humans store and process information in working memory have been extensively studied, and several memory systems have been distinguished. In line with previous capacity estimates for verbal memory and memory for spatial information, recent studies suggest that it is possible to retain up to four objects in visual working memory. The objects used have typically been categorically different colors and shapes. Because knowledge about categories is stored in long-t... 16. Libertarianism & Category-Mistake OpenAIRE Carlos G. Patarroyo G. 2009-01-01 This paper offers a defense against two accusations according to which libertarianism incurs in a category-mistake. The philosophy of Gilbert Ryle will be used to explain the reasons which ground these accusations. Further, it will be shown why, although certain sorts of libertarianism based on agent-causation or Cartesian dualism incur in these mistakes, there is at least one version of libertarianism to which this criticism does not necessarily apply: the version that seeks to find in physi... 17. Convergence semigroup categories Directory of Open Access Journals (Sweden) Gary Richardson 2013-09-01 Full Text Available Properties of the category consisting of all objects of the form (X, S, λ) are investigated, where X is a convergence space, S is a commutative semigroup, and λ: X × S → X is a continuous action. A “generalized quotient” of each object is defined without making the usual assumption that for each fixed g ∈ S, λ(·, g): X → X is an injection. 18. Categories and Commutative Algebra CERN Document Server Salmon, P 2011-01-01 L.
Badescu: Sur certaines singularites des varietes algebriques.- D.A. Buchsbaum: Homological and commutative algebra.- S. Greco: Anelli Henseliani.- C. Lair: Morphismes et structures algebriques.- B.A. Mitchell: Introduction to category theory and homological algebra.- R. Rivet: Anneaux de series formelles et anneaux henseliens.- P. Salmon: Applicazioni della K-teoria all'algebra commutativa.- M. Tierney: Axiomatic sheaf theory: some constructions and applications.- C.B. Winters: An elementary lecture on algebraic spaces. 19. Chemical, Antioxidant and Antimicrobial Investigations of Pinus cembra L. Bark and Needles Directory of Open Access Journals (Sweden) Anca Miron 2011-09-01 Full Text Available The chemical constituents and biological activity of Pinus cembra L. (Pinaceae), native to the Central European Alps and the Carpathian Mountains, are not well known. The aim of the present work was to examine the phenolic content and the antioxidant and antimicrobial effects of hydromethanolic extracts of Pinus cembra L. bark and needles. Bark extract had higher concentrations of total phenolics (299.3 vs. 78.22 mg gallic acid equivalents/g extract), flavonoids (125.3 vs. 19.84 mg catechin equivalents/g extract) and proanthocyanidins (74.3 vs. 12.7 mg cyanidin equivalents/g extract) than needle extract and was more active as a free radical scavenger, reducing agent and antimicrobial agent. The EC50 values in the 2,2-diphenyl-1-picrylhydrazyl (DPPH), 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS) and reducing power assays were 71.1, 6.3 and 26 mg/mL for bark extract and 186.1, 24 and 104 mg/mL for needle extract, respectively. In addition, needle extract showed ferrous ion chelating effects (EC50 = 1,755 μg/mL). The antimicrobial effects against Staphylococcus aureus, Sarcina lutea, Bacillus cereus, Escherichia coli, Pseudomonas aeruginosa and Candida albicans were assessed by the agar diffusion method.
Both extracts (4 mg/well) were active against all the microorganisms tested; bark extract showed higher inhibition of all strains. These results indicate that Pinus cembra L. bark and needles are good sources of phytochemicals with antioxidant and antimicrobial activities. 20. Direct delignification of untreated bark chips with mixed cultures of bacteria. [Bacillus and Cellulomonas strains] Energy Technology Data Exchange (ETDEWEB) Deschamps, A M; Gillie, J P; Lebeault, J M 1981-01-01 Delignification of pine bark chips was observed after about 35 days when they were the sole carbon source in mixed liquid cultures of cellulolytic and lignin-degrading strains of Bacillus and Cellulomonas. No delignification was observed in pure cultures. Free tannins liberated from the chips were also degraded in most of the cultures. The necessity of combining cellulolytic and lignin-degrading bacterial strains to obtain delignification is discussed. (Refs. 25). 1. Antimicrobial potential of Dialium guineense (Wild.) stem bark on some clinical isolates in Nigeria OpenAIRE Olajubu, FA; Akpan, I; Ojo, DA; Oluwalana, SA 2012-01-01 Context: The persistent increase in the number of antibiotic-resistant strains of microorganisms has led to the development of more potent but also more expensive antibiotics. In most developing countries of the world these antibiotics are not readily affordable, thus making compliance difficult. This calls for research into alternative sources of antimicrobials. Dialium guineense is a shrub of the family Leguminosae. Its stem bark is used for the treatment of cough, toothache, and bronchitis... 2. Self-organizing feature map (neural networks) as a tool to select the best indicator of road traffic pollution (soil, leaves or bark of Robinia pseudoacacia L.).
Science.gov (United States) Samecka-Cymerman, A; Stankiewicz, A; Kolon, K; Kempers, A J 2009-07-01 Concentrations of the elements Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb and Zn were measured in the leaves and bark of Robinia pseudoacacia and the soil in which it grew, in the town of Oleśnica (SW Poland) and at a control site. We selected this town because emission from motor vehicles is practically the only source of air pollution, and it seemed interesting to evaluate its influence on soil and plants. The self-organizing feature map (SOFM) yielded distinct groups of soils and R. pseudoacacia leaves and bark, depending on traffic intensity. Only the map classifying bark samples identified an additional group of highly polluted sites along the main highway from Wrocław to Warszawa. The bark of R. pseudoacacia seems to be a better bioindicator of long-term cumulative traffic pollution in the investigated area, while leaves are good indicators of short-term seasonal accumulation trends. 3. The biomedical significance of the phytochemical, proximate and mineral compositions of the leaf, stem bark and root of Jatropha curcas Directory of Open Access Journals (Sweden) Atamgba Agbor Asuk 2015-08-01 Conclusions: The outcome of this study suggests that the leaf, stem bark and root of J. curcas have very good medicinal potential, meet the standard requirements for drug formulation and serve as good sources of energy and nutrients, except for the presence of some anti-nutritional elements predominant in the leaf. 4. LIBERTARISMO & ERROR CATEGORIAL Directory of Open Access Journals (Sweden) Carlos G. Patarroyo G. 2009-01-01 Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category-mistake.
To this end, the philosophy of Gilbert Ryle is used as a tool to explain the reasons grounding these accusations and to show why, although certain versions of libertarianism that appeal to agent-causation or Cartesian dualism commit these mistakes, a libertarianism that seeks in physicalist indeterminism the basis of the possibility of human freedom cannot necessarily be accused of incurring them. 5. Libertarianism & Category-Mistake Directory of Open Access Journals (Sweden) Carlos G. Patarroyo G. 2009-12-01 Full Text Available This paper offers a defense against two accusations according to which libertarianism incurs in a category-mistake. The philosophy of Gilbert Ryle will be used to explain the reasons which ground these accusations. Further, it will be shown why, although certain sorts of libertarianism based on agent-causation or Cartesian dualism incur in these mistakes, there is at least one version of libertarianism to which this criticism does not necessarily apply: the version that seeks to find in physical indeterminism the grounding of human free will. 6. Libertarismo & Error Categorial OpenAIRE PATARROYO G, CARLOS G 2009-01-01 This article offers a defense of libertarianism against two accusations according to which it commits a category-mistake. To this end, the philosophy of Gilbert Ryle is used as a tool to explain the reasons grounding these accusations and to show why, although certain versions of libertarianism that appeal to agent-causation or Cartesian dualism commit these mistakes, a libertarianism that seeks in physicalist indeterminism the basis of the possibili... 7. Do bark beetles and wood borers infest lumber following heat treatment? The role of bark Science.gov (United States) Robert A. Haack; Toby R.
Petrice; Pascal Nzokou 2007-01-01 Wood packing material (WPM) is an important pathway for the movement of bark- and wood-infesting insects (Haack 2006). New international standards for treating WPM, often referred to as "ISPM 15," were adopted in 2002 (FAO 2002). The two approved WPM treatments are heat treatment (56 °C core temperature for 30 min) and fumigation with methyl bromide. These... 8. Enhancement of Human Cheek Skin Texture by Acacia Nilotica Bark ... African Journals Online (AJOL) HP Purpose: To evaluate the effect of a topical application of a cream formulation containing Acacia nilotica bark extract on human cheek skin texture. Methods: A cream containing 3% concentrated extract of Acacia nilotica bark was developed by entrapping the extract in the internal aqueous phase of the cream ... 9. Book review of advances in insect physiology: pine bark beetles Science.gov (United States) If not the most destructive forest pest, bark beetles are probably a close second in their culpability for killing millions of trees in the Northern Hemisphere. This volume provides an aptly timed interdisciplinary review of aspects of bark beetle physiology, especially how it relates to selecting, ... 10. Ecological interactions of bark beetles with host trees Science.gov (United States) Certain species of bark beetles in the insect order Coleoptera, family Curculionidae (formerly Scolytidae) are keystone species in forest ecosystems. However, the tree-killing and woodboring bark and ambrosia beetles are also among the most damaging insects of forest products including lumber, paper... 11.
Bark beetle outbreaks in western North America: Causes and consequences Science.gov (United States) Bentz, Barbara; Logan, Jesse; MacMahon, James A.; Allen, Craig D.; Ayres, Matt; Berg, Edward E; Carroll, Allan; Hansen, Matt; Hicke, Jeff H.; Joyce, Linda A.; Macfarlane, Wallace; Munson, Steve; Negron, Jose; Paine, Tim; Powell, Jim; Raffa, Kenneth; Regniere, Jacques; Reid, Mary; Romme, Bill; Seybold, Steven J.; Six, Diana; Vandygriff, Jim; Veblen, Tom; White, Mike; Witcosky, Jeff; Wood, David J. A. 2005-01-01 Since 1990, native bark beetles have killed billions of trees across millions of acres of forest from Alaska to northern Mexico. Although bark beetle infestations are a regular force of natural change in forested ecosystems, several of the current outbreaks, which are occurring simultaneously across western North America, are the largest and most severe in recorded history. 12. Larvicidal effects of leaf, bark and nutshell of Anacardium ... African Journals Online (AJOL) Comparative analysis of the larvicidal properties of aqueous extracts of leaves, bark and nutshell of Anacardium occidentale L. (Cashew) were evaluated on the larvae of Anopheles gambiae. Three concentrations of 10/100ml, 20/100ml and 30/100ml each of leaf, bark and nutshell were prepared in three replicates. 13. Antimicrobial and phytochemical analysis of leaves and bark ... African Journals Online (AJOL) While quarter strength (5 g/ml) concentrations of the bark methanol and ethanol extracts were the MICs against Staphylococcus aureus and Micrococcus luteus. The phytochemical analysis carried out on B. ferruginea leaves and bark detected the presence of alkaloids, flavonoids, tannin, cardiac glycosides, anthraquinone, ... 14. 
Influence of predators and parasitoids on bark beetle productivity Science.gov (United States) Jan Weslien 1991-01-01 In an earlier field experiment, natural enemies of the bark beetle Ips typographus (L.) were estimated to have reduced bark beetle productivity by more than 80 percent. To test this hypothesis, spruce logs (Picea abies) were placed in the forest in the spring, prior to commencement of flight by I. typographus.... 15. Genetic control of wood density and bark thickness, and their ... African Journals Online (AJOL) Tree diameter under and over bark at breast height (dbh), wood density and bark thickness were assessed on samples from control-pollinated families of Eucalyptus grandis, E. urophylla, E. grandis × E. urophylla and E. urophylla × E. grandis. The material was planted in field trials in the coastal Zululand region of South ... 16. Mapping aerial metal deposition in metropolitan areas from tree bark: a case study in Sheffield, England. Science.gov (United States) Schelle, E; Rawlins, B G; Lark, R M; Webster, R; Staton, I; McLeod, C W 2008-09-01 We investigated the use of metals accumulated on tree bark for mapping their deposition across metropolitan Sheffield by sampling 642 trees of three common species. Mean concentrations of metals were generally an order of magnitude greater than in samples from a remote uncontaminated site. We found trivially small differences among tree species with respect to metal concentrations on bark, and in subsequent statistical analyses did not discriminate between them. We mapped the concentrations of As, Cd and Ni by lognormal universal kriging using parameters estimated by residual maximum likelihood (REML). The concentrations of Ni and Cd were greatest close to a large steel works, their probable source, and declined markedly within 500 m of it and from there more gradually over several kilometres.
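Lognormal universal kriging, as used in the Sheffield bark survey above, predicts on the log scale, so a back-transform is needed to report concentrations in their original units; the standard unbiased form is exp(ŷ + σ²/2), where σ² is the kriging variance. A minimal sketch (the numeric values are hypothetical, not the survey's data):

```python
import numpy as np

def lognormal_backtransform(y_hat, krige_var):
    """Unbiased back-transform of a kriged log-scale estimate.

    y_hat: kriging prediction of Y = ln(Z)
    krige_var: kriging variance of that prediction
    Returns the estimate of Z on the raw concentration scale.
    """
    return np.exp(y_hat + 0.5 * krige_var)

# Hypothetical log-scale prediction of a bark metal concentration
# (ln mg/kg) together with its kriging variance.
y_hat, var = 2.0, 0.5
z_hat = lognormal_backtransform(y_hat, var)
# The naive exp(y_hat) would understate the mean on the raw scale:
assert z_hat > np.exp(y_hat)
```

The half-variance correction matters most where the kriging variance is large, i.e. far from sampled trees.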
Arsenic was much more evenly distributed, probably as a result of locally mined coal burned in domestic fires for many years. Tree bark seems to integrate airborne pollution over time, and our findings show that sampling and analysing it are cost-effective means of mapping and identifying sources. 17. Beyond the Categories. Science.gov (United States) Weeks, Jeffrey 2015-07-01 Shushu is a Turkish Cypriot drag performance artist and the article begins with a discussion of a short film about him by a Greek Cypriot playwright, film maker, and gay activist. The film is interesting in its own right as a documentary about a complex personality, but it is also relevant to wider discussion of sexual and gender identity and categorization in a country divided by history, religion, politics, and military occupation. Shushu rejects easy identification as gay or transgender, or anything else. He is his own self. But refusing a recognized and recognizable identity brings problems, and I detected a pervasive mood of melancholy in his portrayal. The article builds from this starting point to explore the problematic nature of identities and categorizations in the contemporary world. The analysis opens with the power of words and language in defining and classifying sexuality. The early sexologists set in motion a whole catalogue of categories which continue to shape sexual thinking, believing that they were providing a scientific basis for a more humane treatment of sexual variations. This logic continues in DSM-5. The historical effect, however, has been more complex. Categorizations have often fixed individuals into a narrow band of definitions and identities that marginalize and pathologize. The emergence of radical sexual-social movements from the late 1960s offered new forms of grassroots knowledge in opposition to the sexological tradition, but at first these movements worked to affirm rather than challenge the significance of identity categories. 
Increasingly, however, identities have been problematized and challenged for limiting sexual and gender possibilities, leading to the apparently paradoxical situation where sexual identities are seen as both necessary and impossible. There are emotional costs both in affirming a fixed identity and in rejecting one. Shushu is caught in this dilemma, leading to the pervasive sense of loss that shapes the 18. Comparison of protein profiles of beech bark disease-resistant or beech bark disease-susceptible American beech Science.gov (United States) Mary E. Mason; Marek Krasowski; Judy Loo; Jennifer. Koch 2011-01-01 Proteomic analysis of beech bark proteins from trees resistant and susceptible to beech bark disease (BBD) was conducted. Sixteen trees from eight geographically isolated stands, 10 resistant (healthy) and 6 susceptible (diseased/infested) trees, were studied. The genetic complexity of the sample unit, the sampling across a wide geographic area, and the complexity of... 19. The Hidden History of a Famous Drug: Tracing the Medical and Public Acculturation of Peruvian Bark in Early Modern Western Europe (c. 1650-1720). Science.gov (United States) Klein, Wouter; Pieters, Toine 2016-10-01 The history of the introduction of exotic therapeutic drugs in early modern Europe is usually rife with legend and obscurity and Peruvian bark is a case in point. The famous antimalarial drug entered the European medical market around 1640, yet it took decades before the bark was firmly established in pharmaceutical practice. This article argues that the history of Peruvian bark can only be understood as the interplay of its trajectories in science, commerce, and society. Modern research has mostly focused on the first of these, largely due to the abundance of medico-historical data. 
While appreciating these findings, this article proposes to integrate the medical trajectory in a richer narrative, by drawing particular attention to the acculturation of the bark in commerce and society. Although the evidence we have for these two trajectories is still sketchy and disproportionate, it can nevertheless help us to make sense of sources that have not yet been an obvious focus of research. Starting from an apparently isolated occurrence of the drug in a letter, this article focuses on Paris as the location where medical and public appreciation of the bark took shape, by exploring several contexts of knowledge circulation and medical practice there. These contexts provide a new window on the early circulation of knowledge of the bark, at a time when its eventual acceptance was by no means certain. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: [email protected]. 20. Evaluating a humane alternative to the bark collar: Automated differential reinforcement of not barking in a home-alone setting. Science.gov (United States) Protopopova, Alexandra; Kisten, Dmitri; Wynne, Clive 2016-12-01 The aim of this study was to develop a humane alternative to the traditional remote devices that deliver punishers contingent on home-alone dog barking. Specifically, we evaluated the use of remote delivery of food contingent on intervals of not barking during the pet owner's absence. In Experiment 1, 5 dogs with a history of home-alone nuisance barking were recruited. Using an ABAB reversal design, we demonstrated that contingent remote delivery of food decreased home-alone barking for 3 of the dogs. In Experiment 2, we demonstrated that it is possible to thin the differential-reinforcement-of-other-behavior (DRO) schedule gradually, resulting in a potentially more acceptable treatment. Our results benefit the dog training community by providing a humane tool to combat nuisance barking. 
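The contingency evaluated in the dog-barking study above, differential reinforcement of other behavior with gradual schedule thinning, can be sketched as a simple controller: deliver food after every interval that contains no barking, and lengthen the interval after a run of quiet intervals. The class below is an illustrative sketch only; the interval lengths and thinning rule are assumptions, not the study's protocol:

```python
class DroFeeder:
    """Sketch of a DRO (differential reinforcement of other behavior)
    feeder: food is dispensed after each interval with no barking, and
    the interval is lengthened ("thinned") after every
    `successes_per_step` consecutive quiet intervals."""

    def __init__(self, interval_s=5.0, step_s=5.0, successes_per_step=3):
        self.interval_s = interval_s          # current DRO interval
        self.step_s = step_s                  # increment used when thinning
        self.successes_per_step = successes_per_step
        self._streak = 0

    def end_of_interval(self, barked: bool) -> bool:
        """Call once per elapsed interval; returns True if food is dispensed."""
        if barked:
            self._streak = 0                  # reset: no food this interval
            return False
        self._streak += 1
        if self._streak == self.successes_per_step:
            self.interval_s += self.step_s    # thin the schedule
            self._streak = 0
        return True

feeder = DroFeeder()
outcomes = [feeder.end_of_interval(b) for b in [False, False, False, True, False]]
# Food after every quiet interval; after 3 quiet intervals in a row
# the DRO interval grows from 5 s to 10 s.
assert outcomes == [True, True, True, False, True]
assert feeder.interval_s == 10.0
```

In practice the "barked" flag would come from a sound-level detector, and thinning would continue until the owner's target interval is reached.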
© 2016 Society for the Experimental Analysis of Behavior. 1. Trophic habits of mesostigmatid mites associated with bark beetles in Mexico Science.gov (United States) M. Patricia Chaires-Grijalva; Edith G. Estrada-Venegas; Armando Equihua-Martinez; John C. Moser; Stacy R. Blomquist 2016-01-01 Samples of bark and logs damaged by bark beetles were collected from 16 states of Mexico from 2007 to 2012. Fifteen bark beetle species were found within the bark and log samples and were examined for phoretic mites and arthropod associates. Thirty-three species of mesostigmatid mites were discovered within the samples. They were identified in several trophic guilds... 2. Thickness and roughness measurements for air-dried longleaf pine bark Science.gov (United States) Thomas L. Eberhardt 2015-01-01 Bark thicknesses for longleaf pine (Pinus palustris Mill.) were investigated using disks collected from trees harvested on a 70-year-old plantation. Maximum inner bark thickness was relatively constant along the tree bole whereas maximum outer bark thickness showed a definite decrease from the base of the tree to the top. The minimum whole bark thickness followed the... 3. Categories for Observing Language Arts Instruction (COLAI). Science.gov (United States) Benterud, Julianna G. Designed to study individual use of time spent in reading during regularly scheduled language arts instruction in a natural classroom setting, this coding sheet consists of nine categories: (1) engagement, (2) area of language arts, (3) instructional setting, (4) partner (teacher or pupil(s)), (5) source of content, (6) type of unit, (7) assigned… 4. Language categories in Russian morphology OpenAIRE زهرایی زهرایی 2009-01-01 When studying Russian morphology, one can distinguish two categories. These categories are “grammatical” and “lexico-grammatical”. Grammatical categories can be specified through a series of grammatical features of words. 
Considering different criteria, Russian grammarians and linguists divide grammatical categories of their language into different types. In determining lexico-grammatical types, in addition to a series of grammatical features, they also consider a series of lexico-semantic fe... 5. NUTRIENT CONTENT IN DURIAN (DURIO ZIBETHINUS L.) BRANCH BARK Directory of Open Access Journals (Sweden) Jaime A. TEIXEIRA DA SILVA 2017-12-01 Full Text Available Durian (Durio zibethinus L.) fruits form on the bark of branches. The aim of our research was to assess whether branches bearing different numbers of fruits have different nutrient contents in their bark. We determined the nitrogen (N), phosphorus (P), potassium (K), and carbon (C) content in branch bark 30 days after fruit set using branches bearing different numbers of fruits per panicle (0, 1, 2 or >2) of two varieties (‘Otong’ and ‘Kani’). Bark was cut into 0.03 m long and 0.005 m wide segments with an average thickness of 0.00085 m. The bark of branches bearing a different number of fruits had the same N, P, K, and C content but different ratios of C/N, C/P, C/K, N/K, and P/K. The bark of ‘Otong’ branches had a higher N content but a lower C/N ratio than ‘Kani’ bark. 6. How can bark from landings and mills be used Energy Technology Data Exchange (ETDEWEB) Ostalski, R 1983-01-01 The use of bark (mainly Scots pine) as an organic fertilizer and for soil amelioration is explored. A typical analysis of three-month-old bark is given, and methods for composting with solid fertilizers and slurry are described. Stacks 3 m long by 1 m wide and up to 2 m high are used, with fertilizer (NPK at 2:1.2:1.2 kg/cubic m of bark) added between layers of bark approximately 25 cm deep. Poultry manure or cow/horse/pig manure can be used at up to 10% or 30% respectively of compost volume, and the amount of N fertilizer reduced by up to three quarters depending on the type and quantity of manure. Stacks are turned 2-3 times and used after twelve months.
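The fertilizer dosing in the composting method above (N:P:K at 2:1.2:1.2 kg per cubic metre of bark) is simple arithmetic; for the 3 m × 1 m stacks of up to 2 m height described there, a sketch:

```python
def fertilizer_for_stack(length_m, width_m, height_m,
                         rate_kg_per_m3=(2.0, 1.2, 1.2)):
    """Mineral fertilizer needed for a bark compost stack, using the
    N:P:K rate of 2:1.2:1.2 kg per cubic metre of bark quoted above."""
    volume = length_m * width_m * height_m
    return tuple(round(r * volume, 2) for r in rate_kg_per_m3)

# Stack dimensions from the text: 3 m long, 1 m wide, 2 m high (6 m³).
n, p, k = fertilizer_for_stack(3, 1, 2)
assert (n, p, k) == (12.0, 7.2, 7.2)
```

The text also notes that the N component can be cut by up to three quarters when manure is mixed in, so the computed N figure is an upper bound in that case.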
Liquid slurry is best applied to larger stacks every 2-3 days for one month, and then left for 2 and a half to 3 months. Composted bark can be used in young plantations - especially on degraded sites - at rates in the region of 400-800 cubic m/ha, depending on soil type etc. Bark can also be used without composting on some sites, but is best ground first and should be weathered to oxidize the tannins. Composted bark is also used as a mulch on field scale vegetables, generally at 200-400 cubic m/ha. 7. Strategies towards sustainable bark sourcing as raw material for ... African Journals Online (AJOL) SARAH 2017-07-31 Jul 31, 2017 ... plant-based drug development: a case study on Garcinia lucida tree species ... and Pharmaceutical Chemistry, Faculty of Medicine and Biomedical ... Standard Deviation values were higher, suggesting that each tree had its ... 8. Response of exposed bark and exposed lichen to an urban area Energy Technology Data Exchange (ETDEWEB) Cruz, A.M.J. [Polytechnic Institute of Coimbra, Oliveira do Hospital (Portugal). Oliveira do Hospital College of Technology and Management; Freitas, M.C.; Canha, N. [URSN, Sacavem (Portugal). Inst. Tecnologico e Nuclear (ITN); Verburg, T.G.; Wolterbeek, H.T. [Technical Univ. of Delft (Netherlands). Dept. of Radiation, Radionuclides and Reactors 2011-07-01 The aim of this study is to understand emission sources of chemical elements using biomonitoring as a tool. The selected lichen and bark were respectively Parmotrema bangii and Criptomeria japonica, sampled in the pollution-free atmosphere of Azores (Sao Miguel island), Portugal, and were exposed in the courtyards of 22 basic schools of Lisbon. The exposure was from January to May 2008 and from June to October 2008 (designated through the text as winter and summer respectively). The chemical element concentrations were determined by INAA. Conductivity of the lichen samples was measured. 
Factor analysis (MCTTFA) was applied to the winter/summer bark/lichen exposure datasets. Arsenic emission sources, soil with anthropogenic contamination, a Se source, traffic, industry, and a sea contribution were identified. In lichens, a physiological source based on the conductivity values was found. The spatial study showed the contribution of sources to specific school locations. Conductivity values were high in summer at locations such as the international Lisbon airport and downtown. Lisbon is spatially influenced by marine air mass transportation. It is concluded that one air sampler in Lisbon might be enough to characterize the emission sources to which the city is exposed. (orig.) 9. Characterization and Antioxidant Properties of the Condensed Tannins from Alaska Cedar Inner Bark Directory of Open Access Journals (Sweden) Martha Rosales-Castro 2014-05-01 Full Text Available The structure and antioxidant activity of condensed tannins isolated from Alaska Cedar inner bark have been investigated. Oligomers of flavan-3-ol were purified by column chromatography (Sephadex LH-20) and analyzed by 13C NMR and MALDI-TOF MS spectrometry. Their antioxidant activities were measured using 1,1'-diphenyl-2-picrylhydrazyl (DPPH) and 2,2-azino-bis-3-ethylbenzothiazoline-6-sulfonic acid (ABTS) free radical scavenging, ferric reducing/antioxidant power (FRAP), and β-carotene-linoleic acid model system (β-CLAMS) assays. Results showed that the condensed tannins consist of both homogeneous and heterogeneous oligomers of procyanidin (catechin/epicatechin) and prodelphinidin (gallocatechin/epigallocatechin) flavan-3-ol units, and of oligomers from trimers to heptamers with predominantly B-type interflavan linkages, as is most common in proanthocyanidins. The condensed tannins showed significant antioxidant activity, as the median inhibition capacity (IC50) is comparable to the catechin control response.
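An IC50 (median inhibition capacity) such as the one quoted above is typically read off a dose-response series; a common minimal estimate is linear interpolation between the two doses that bracket 50% inhibition. A sketch with hypothetical data (not the paper's measurements):

```python
def ic50(doses, inhibition):
    """Estimate the median inhibitory concentration (IC50) by linear
    interpolation between the two doses bracketing 50% inhibition.
    `doses` must be increasing; `inhibition` is in percent."""
    points = list(zip(doses, inhibition))
    for (d0, i0), (d1, i1) in zip(points, points[1:]):
        if i0 <= 50.0 <= i1:
            return d0 + (50.0 - i0) * (d1 - d0) / (i1 - i0)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical DPPH dose-response (dose in µg/mL vs % inhibition):
assert ic50([10, 20, 40, 80], [20.0, 35.0, 65.0, 90.0]) == 30.0
```

Dedicated curve-fitting (e.g. a four-parameter logistic) gives a better estimate when the full curve is available; the interpolation above only illustrates the concept.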
Alaska Cedar inner bark oligomers show high antioxidant capacity, evaluated by both methods based on electron transfer mechanisms and hydrogen atom transfer reactions. This bark may be considered a new source of natural antioxidants for nutraceutical ingredients. 10. The Study of Interactions between Active Compounds of Coffee and Willow (Salix sp.) Bark Water Extract Directory of Open Access Journals (Sweden) Agata Durak 2014-01-01 Full Text Available Coffee and willow are known as valuable sources of biologically active phytochemicals such as chlorogenic acid, caffeine, and salicin. The aim of the study was to determine the interactions between the active compounds contained in water extracts from coffee and willow bark (Salix purpurea and Salix myrsinifolia). Raw materials and their mixtures were characterized by multidirectional antioxidant activities; however, bioactive constituents interacted with each other. Synergism was observed for the ability to inhibit lipid peroxidation and for reducing power, whereas compounds able to scavenge the ABTS radical cation acted antagonistically. Additionally, phytochemicals from willow bark possessed a hydrophilic character and thermostability, which justifies their potential use as an ingredient in coffee beverages. The proposed mixtures may be used in the prophylaxis or treatment of some civilization diseases linked with oxidative stress. Most importantly, the strong synergism observed for phytochemicals able to protect lipids against oxidation may suggest a protective effect for cell membrane phospholipids. The results indicate that extracts from the bark of the tested Salix genotypes, used as an ingredient in coffee beverages, can provide health-promoting benefits to consumers; however, this issue requires further study. 11.
Comparative use of lichens, mosses and tree bark to evaluate nitrogen deposition in Germany International Nuclear Information System (INIS) Boltersdorf, Stefanie H.; Pesch, Roland; Werner, Willy 2014-01-01 To compare three biomonitoring techniques for assessing nitrogen (N) pollution in Germany, 326 lichen, 153 moss and 187 bark samples were collected from 16 sites of the national N deposition monitoring network. The analysed ranges of N content of all investigated biomonitors (0.32%–4.69%) and the detected δ15N values (−15.2‰–1.5‰) made it possible to reveal species-specific spatial patterns of N concentrations in biota and so to indicate atmospheric N deposition in Germany. The comparison with measured and modelled N deposition data shows that lichens in particular are able to reflect the local N deposition originating from agriculture. - Highlights: • We investigated N pollution with the help of bioindicators in Germany. • The N load was monitored with lichens, mosses and bark by tissue N content. • The main source of N pollution was revealed by tissue δ15N values. • The N content and δ15N in lichens in particular reflected agriculture-related N deposition. - First nationwide comparison of lichens, mosses and tree bark to assess the N deposition in Germany by analysing N content and δ15N values 12. Antimicrobial and antifungal activities of Cordia dichotoma (Forster F.) bark extracts. Science.gov (United States) Nariya, Pankaj B; Bhalodia, Nayan R; Shukla, V J; Acharya, R N 2011-10-01 Cordia dichotoma Forst.f. bark is identified as the botanical source of Shlesmataka in the Ayurvedic pharmacopoeias. The present study was carried out to investigate the antibacterial and antifungal potential of Cordia dichotoma bark. Antibacterial activity of methanol and butanol extracts of the bark was assessed against two Gram-negative bacteria (Escherichia coli and Pseudomonas aeruginosa) and two Gram-positive bacteria (St. pyogenes and Staphylococcus aureus).
The antifungal activity of the extracts was assessed against three common pathogenic fungi (Aspergillus niger, A. clavatus, and Candida albicans). The zones of inhibition of the extracts were compared with those of standards such as Ampicillin, Ciprofloxacin, Norfloxacin and Chloramphenicol for antibacterial activity, and Nystatin and Griseofulvin for antifungal activity. The extracts showed remarkable zones of inhibition of bacterial and fungal growth, and the results obtained were comparable with those of the standard drugs against the organisms tested. The activity of the extracts increased linearly with the concentration of extract (mg/ml). The results demonstrated antibacterial and antifungal activity against the organisms tested. 13. Antimicrobial potential of Dialium guineense (Wild.) stem bark on some clinical isolates in Nigeria. Science.gov (United States) Olajubu, Fa; Akpan, I; Ojo, DA; Oluwalana, Sa 2012-01-01 The persistent increase in the number of antibiotic-resistant strains of microorganisms has led to the development of more potent but also more expensive antibiotics. In most developing countries of the world these antibiotics are not readily affordable, thus making compliance difficult. This calls for research into alternative sources of antimicrobials. Dialium guineense is a shrub of the family Leguminosae. Its stem bark is used for the treatment of cough, toothache, and bronchitis. Despite the acclaimed efficacy of D. guineense, there is no scientific evidence in its support. This work was carried out to assess the antimicrobial activity of D. guineense in vitro against some clinical isolates. D. guineense stem bark was collected, and 50 g of the air-dried and powdered stem bark was soaked for 72 hours in 1 l of each of the six solvents used in this study. Each mixture was refluxed, agitated at 200 rpm for 1 hour, filtered using Whatman No. 1 filter paper and, finally, freeze dried.
The extracts were then tested for antimicrobial activity using the agar diffusion method. The highest percentage yield, 23.2%, was obtained with ethanol. Phytochemical screening showed that D. guineense contains anthraquinones, alkaloids, flavonoids, tannins, and saponins. The extracts revealed a broad spectrum of activity, with Salmonella typhi and Staphylococcus aureus showing the greatest zones of inhibition (18.0 mm). Only Candida albicans among the fungi tested was inhibited by the extract. The greatest zone of inhibition among the fractions was 16.0 mm. D. guineense exhibited bactericidal activity at the 7th and 9th hours against Streptococcus pneumoniae and S. aureus 25923, and at the 10th hour against S. typhi and C. albicans. The greatest activity was noted against S. pneumoniae, where there was a reduced viable cell count after 6 hours of exposure. Stem bark extract of D. guineense (Wild.) has the potential to be developed into an antimicrobial agent. 14. MALDI-TOF MS analysis of condensed tannins with potent antioxidant activity from the leaf, stem bark and root bark of Acacia confusa. Science.gov (United States) Wei, Shu-Dong; Zhou, Hai-Chao; Lin, Yi-Ming; Liao, Meng-Meng; Chai, Wei-Ming 2010-06-15 The structures of the condensed tannins from leaf, stem bark and root bark of Acacia confusa were characterized by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis, and their antioxidant activities were measured using 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging and ferric reducing/antioxidant power (FRAP) assays. The results showed that the condensed tannins from stem bark and root bark include propelargonidin and procyanidin, and the leaf condensed tannins include propelargonidin, procyanidin and prodelphinidin, all with the procyanidin dominating.
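The oligomer series seen in MALDI-TOF spectra of condensed tannins can be anticipated with a simple mass ladder. The sketch below assumes procyanidin-type oligomers built from (epi)catechin units (monoisotopic mass ≈ 290.08 Da, each extension adding ≈ 288.06 Da) detected as [M+Na]+ adducts (+22.99 Da); the values are approximate and for illustration only, not taken from the paper:

```python
# Approximate monoisotopic masses (Da); illustrative assumptions.
CATECHIN = 290.079      # C15H14O6, terminal (epi)catechin unit
EXTENSION = 288.063     # each additional unit (2 H lost on linking)
SODIUM = 22.990         # Na adduct seen in positive-mode MALDI

def procyanidin_mz(n_units: int) -> float:
    """Predicted m/z of the [M+Na]+ ion of a B-type procyanidin
    oligomer containing n_units flavan-3-ol units."""
    return CATECHIN + (n_units - 1) * EXTENSION + SODIUM

# Ladder from trimer to dodecamer, the degree-of-polymerization
# range reported for the stem bark tannins.
ladder = {n: round(procyanidin_mz(n), 1) for n in range(3, 13)}
assert len(ladder) == 10
# Successive peaks in the series are spaced by the 288.06 Da repeat:
assert round(procyanidin_mz(4) - procyanidin_mz(3), 2) == 288.06
```

Propelargonidin and prodelphinidin units shift individual peaks by roughly ±16 Da per hydroxyl, which is how mixed-unit oligomers are recognized in such spectra.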
The condensed tannins had different polymer chain lengths, varying from trimers to undecamers for leaf and root bark and to dodecamers for stem bark. The condensed tannins extracted from the leaf, stem bark and root bark all showed very good DPPH radical scavenging activity and ferric reducing power. 15. Vasorelaxant effect of Prunus yedoensis bark Directory of Open Access Journals (Sweden) Lee Kyungjin 2013-02-01 Full Text Available Abstract Background Prunus yedoensis Matsum. is used as a traditional medicine ('Yaeng-Pi' or 'Hua-Pi') in Japan and Korea. However, no studies have examined the pharmacological activities of P. yedoensis bark. Only the antioxidant and antiviral activities of P. yedoensis fruit and the anti-hyperglycaemic effect of P. yedoensis leaf have been investigated. While studying the antihypertensive effects of several medicinal plants, we found that a methanol extract of P. yedoensis bark (MEPY) had distinct vasorelaxant effects on rat aortic rings. Methods The aortic rings were removed from Sprague-Dawley rats and suspended in organ chambers containing 10 ml Krebs-Henseleit solution. The aortic rings were placed between 2 tungsten stirrups and connected to an isometric force transducer. Changes in tension were recorded via isometric transducers connected to a data acquisition system. Results MEPY relaxed the contraction induced by phenylephrine (PE) in both endothelium-intact and endothelium-denuded aortic rings in a concentration-dependent manner. However, the vasorelaxant effects of MEPY on endothelium-denuded aortic rings were lower than on endothelium-intact aortic rings. The vasorelaxant effects of MEPY on endothelium-intact aortic rings were reduced by pre-treatment with l-NAME, methylene blue, or ODQ. However, pre-treatment with indomethacin, atropine, glibenclamide, tetraethylammonium, or 4-aminopyridine had no effect.
In addition, MEPY inhibited the contraction induced by extracellular Ca2+ in endothelium-denuded rat thoracic aorta rings pre-contracted by PE (1 μM) or KCl (60 mM) in Ca2+-free solution. Conclusions Our results suggest that MEPY exerts its vasorelaxant effects via the activation of NO formation by means of the l-Arg and NO-cGMP pathways and via the blockade of extracellular Ca2+ channels. 16. MALDI-TOF MS Analysis of Condensed Tannins with Potent Antioxidant Activity from the Leaf, Stem Bark and Root Bark of Acacia confusa OpenAIRE Wei; Zhou; Lin; Liao; Chai 2010-01-01 The structures of the condensed tannins from leaf, stem bark and root bark of Acacia confusa were characterized by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis, and their antioxidant activities were measured using 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging and ferric reducing/antioxidant power (FRAP) assays. The results showed that the condensed tannins from stem bark and root bark include propelargonidin and procyanidi... 17. Dimethoxyflavone isolated from the stem bark of Stereospermum ... African Journals Online (AJOL) trihydroxy-3′-(8″-acetoxy-7″-methyloctyl)-5,6-dimethoxyflavone, a flavonoid isolated from the stem bark of Stereospermum kunthianum. The antidiarrhoeal activity was evaluated using rodent models with diarrhoea. The normal intestinal transit, ... 18. Pharmacognostic Evaluation of the Bark of Acacia suma Roxb ... African Journals Online (AJOL) Methods: The macroscopic and microscopic features of the bark were studied, including the ... Conclusion: The findings of this study will facilitate pharmacognostic standardization of the plant ..... EN, Samuelsson G. Inventory of plants used in. 19. Antioxidant benzophenones and xanthones from the root bark of ... African Journals Online (AJOL) Antioxidant benzophenones and xanthones from the root bark of Garcinia smeathmannii.
Alain Meli Lannang, Justin Komguem, Fernande Ngounou Ngninzeko, Jean Gustave Tangmouo, David Lontsi, Asma Ajaz, Muhammad Iqbal Choudhary, Beiban Luc Sondengam, Atta-ur-Rahman ... 20. Log bioassay of residual effectiveness of insecticides against bark beetles Science.gov (United States) Richard H. Smith 1982-01-01 Residual effectiveness of nine insecticides applied to bark was tested against western, mountain, and Jeffrey pine beetles. Ponderosa and Jeffrey pine trees were treated, logs were cut from them 2 to 13 months later, and the logs were bioassayed with the three beetles. The insecticides were sprayed at the rate of 1 gal (3.8 l) per 40- or 80-ft² (3.6 or 7.2 m²) of bark surface at varying... 1. sources Directory of Open Access Journals (Sweden) Shu-Yin Chiang 2002-01-01 Full Text Available In this paper, we study simplified models of an ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under the different output service schemes. 2. Management, morphological, and environmental factors influencing Douglas-fir bark furrows in the Oregon Coast Range Science.gov (United States) Sheridan, Christopher D.; Puettmann, Klaus J.; Huso, Manuela M.P.; Hagar, Joan C.; Falk, Kristen R. 2013-01-01 Many land managers in the Pacific Northwest have the goal of increasing late-successional forest structures. Despite the documented importance of Douglas-fir bark structure in forested ecosystems, little is known about the factors influencing bark development and how foresters can manage it. This study investigated the relative importance of tree size, growth, environmental factors, and thinning for Douglas-fir bark furrow characteristics in the Oregon Coast Range. Bark furrow depth, area, and bark roughness were measured for Douglas-fir trees in young heavily thinned and unthinned sites and compared to older reference sites.
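The Bernoulli-source ATM multiplexer model described in entry 1 above is easy to simulate: in each time slot every source deposits a cell with probability p, and the output line transmits at most one cell per slot. A minimal discrete-time sketch with hypothetical parameters (not the paper's model details):

```python
import random

def simulate_multiplexer(n_sources=8, p=0.1, slots=10_000, seed=42):
    """Discrete-time ATM multiplexer with Bernoulli traffic sources:
    each slot, every source emits a cell with probability p; the
    output line serves at most one cell per slot.  Returns the mean
    queue length over the run."""
    rng = random.Random(seed)
    queue = 0
    total = 0
    for _ in range(slots):
        arrivals = sum(rng.random() < p for _ in range(n_sources))
        queue += arrivals
        if queue > 0:
            queue -= 1          # serve one cell this slot
        total += queue
    return total / slots

mean_q = simulate_multiplexer()
# Offered load n*p = 0.8 < 1 cell/slot, so the queue remains stable.
assert mean_q >= 0.0
```

Different output service schemes (priority, round-robin among source classes, and so on) would change only the service step of this loop.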
We tested models for relationships between bark furrow response and thinning, tree diameter, diameter growth, and environmental factors. Separately, we compared bark responses measured on trees used by bark-foraging birds with trees with no observed usage. Tree diameter and diameter growth were the most important variables in predicting bark characteristics in young trees. Measured environmental variables were not strongly related to bark characteristics. Bark furrow characteristics in old trees were influenced by tree diameter and surrounding tree densities. Young trees used by bark foragers did not have bark characteristics different from those of unused trees. Efforts to enhance Douglas-fir bark characteristics should emphasize retention of larger-diameter trees and enhancement of their growth. 3. Subject categories and scope descriptions International Nuclear Information System (INIS) 2002-01-01 This document is one in a series of publications known as the ETDE/INIS Joint Reference Series. It defines the subject categories and provides the scope descriptions to be used for categorization of the nuclear literature for the preparation of INIS and ETDE input by national and regional centres. Together with the other volumes of the INIS Reference Series it defines the rules, standards and practices and provides the authorities to be used in the International Nuclear Information System and ETDE. A complete list of the volumes published in the INIS Reference Series may be found on the inside front cover of this publication. This INIS/ETDE Reference Series document is intended to serve two purposes: to define the subject scope of the International Nuclear Information System (INIS) and the Energy Technology Data Exchange (ETDE) and to define the subject classification scheme of INIS and ETDE.
It is thus the guide to the inputting centres in determining which items of literature should be reported, and in determining where the full bibliographic entry and abstract of each item should be included in the INIS or ETDE database. Each category is identified by a category code consisting of three alphanumeric characters. A scope description is given for each subject category. The scope of INIS is the sum of the scopes of all the categories. With most categories cross references are provided to other categories where appropriate. Cross references should be of assistance in finding the appropriate category; in fact, by indicating topics that are excluded from the category in question, the cross references help to clarify and define the scope of the category to which they are appended. A Subject Index is included as an aid to subject classifiers, but it is only an aid and not a means for subject classification. It facilitates the use of this document, but is no substitute for the description of the scope of the subject categories. 4. Radiation protection in category III large gamma irradiators International Nuclear Information System (INIS) Costa, Neivaldo; Furlan, Gilberto Ribeiro; Itepan, Natanael Marcio 2011-01-01 This article discusses the advantages of category III large gamma irradiators compared to the others, with emphasis on aspects of radiological protection, in the industrial sector. This category is a kind of irradiator almost unknown to the regulatory authorities and the industrial community, despite its simple construction and greater radiation safety intrinsic to the model, able to maintain an efficiency of productivity comparable to that of category IV. Worldwide, more than 200 category IV irradiators are installed, while no category III irradiator is in operation. In a category III gamma irradiator, the source remains fixed in the bottom of the tank, always shielded by water, negating the exposure risk.
Taking into account the benefits in relation to radiation safety, the category III large irradiators are highly recommended for industrial, commercial purposes or scientific research. (author) 5. How categories come to matter DEFF Research Database (Denmark) Leahu, Lucian; Cohn, Marisa; March, Wendy 2013-01-01 In a study of users' interactions with Siri, the iPhone personal assistant application, we noticed the emergence of overlaps and blurrings between explanatory categories such as "human" and "machine". We found that users work to purify these categories, thus resolving the tensions related to the ...... initial data analysis, due to our own forms of latent purification, and outline the particular analytic techniques that helped lead to this discovery. We thus provide an illustrative case of how categories come to matter in HCI research and design... 6. The composition of category conjunctions. Science.gov (United States) Hutter, Russell R C; Crisp, Richard J 2005-05-01 In three experiments, the authors investigated the impression formation process resulting from the perception of familiar or unfamiliar social category combinations. In Experiment 1, participants were asked to generate attributes associated with either a familiar or unfamiliar social category conjunction. Compared to familiar combinations, the authors found that when the conjunction was unfamiliar, participants formed their impression less from the individual constituent categories and relatively more from novel emergent attributes. In Experiment 2, the authors replicated this effect using alternative experimental materials.
In Experiment 3, the effect generalized to additional (orthogonally combined) gender and occupation categories. The implications of these findings for understanding the processes involved in the conjunction of social categories, and the formation of new stereotypes, are discussed. 7. Enhanced yield of phenolic extracts from banana peels (Musa acuminata Colla AAA) and cinnamon barks (Cinnamomum verum) and their antioxidative potentials in fish oil. Science.gov (United States) Anal, Anil Kumar; Jaisanti, Sirorat; Noomhorm, Athapol 2014-10-01 The bioactive compounds of banana peels and cinnamon barks were extracted by vacuum microwave and ultrasonic-assisted extraction methods at pre-determined temperatures and times. These methods enhance the extract yields in a shorter time. The highest yields of both extracts were obtained from the conditions which employed the highest temperature and the longest time. The extract yield from cinnamon bark was higher with the ultrasonic method than with the vacuum microwave method, while the vacuum microwave method gave a higher extraction yield from banana peel than the ultrasonic method. The phenolic contents of cinnamon bark and banana peel extracts were 467 and 35 mg gallic acid equivalent/g extract, respectively. The flavonoid contents found in banana peel and cinnamon bark extracts were 196 and 428 mg/g quercetin equivalent, respectively. In addition, it was found that cinnamon bark gave higher 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity and total antioxidant activity (TAA). The antioxidant activity of the extracts was analyzed by measuring the peroxide and p-anisidine values after oxidation of fish oils, stored for a month (30 days) at 25 °C, and showed lower peroxide and p-anisidine values in the fish oils containing the sample extracts in comparison to the fish oil without any extract.
The banana peel and cinnamon extracts thus showed antioxidant ability to prevent the oxidation of fish oil and may be considered rich sources of natural antioxidants. 8. Diarylheptanoid Glycosides of Morella salicifolia Bark Directory of Open Access Journals (Sweden) Edna Makule 2017-12-01 Full Text Available A methanolic extract of Morella salicifolia bark was fractionated by various chromatographic techniques yielding six previously unknown cyclic diarylheptanoids, namely, 7-hydroxymyricanol 5-O-β-d-glucopyranoside (1), juglanin B 3-O-β-d-glucopyranoside (2), 16-hydroxyjuglanin B 17-O-β-d-glucopyranoside (3), myricanone 5-O-β-d-glucopyranosyl-(1→6)-β-d-glucopyranoside (4), neomyricanone 5-O-β-d-glucopyranosyl-(1→6)-β-d-glucopyranoside (5), and myricanone 17-O-α-l-arabinofuranosyl-(1→6)-β-d-glucopyranoside (6), respectively, together with 10 known cyclic diarylheptanoids. The structural diversity of the diarylheptanoid pattern in M. salicifolia resulted from varying glycosidation at C-3, C-5, and C-17 as well as from substitution at C-11 with hydroxy, carbonyl or sulfate groups, respectively. Structure elucidation of the isolated compounds was achieved on the basis of one- and two-dimensional nuclear magnetic resonance (NMR) as well as high-resolution electrospray ionisation mass spectrometry (HR-ESI-MS) analyses. The absolute configuration of the glycosides was confirmed after hydrolysis and synthesis of O-(S)-methyl butyrated (SMB) sugar derivatives by comparison of their 1H-NMR data with those of reference sugars. Additionally, the absolute configuration of diarylheptanoid aglycones at C-11 was determined by electronic circular dichroism (ECD) spectra simulation and comparison with experimental CD spectra after hydrolysis. 9.
The Use of Near-Infrared (NIR) Spectroscopy and Principal Component Analysis (PCA) to Discriminate Bark and Wood of the Most Common Species of the Pellet Sector DEFF Research Database (Denmark) Toscano, Giuseppe; Rinnan, Åsmund; Pizzi, Andrea 2017-01-01 related to origin and source, difficult to investigate through traditional analyses, such as the type of wood (hardwood/softwood) and the presence of bark. The development of a rapid technique able to provide this information could be an advantageous tool for the energy sector providing indications....../bark blends (2%-20% (w/w)) were analyzed, indicating the ability of the system to recognize blends from pure material. This study has shown that spectroscopy coupled with multivariate data analysis is a useful tool for verifying the compliance of producer declarations and assisting experts in evaluation... 10. How do Category Managers Manage? DEFF Research Database (Denmark) Hald, Kim Sundtoft; Sigurbjornsson, Tomas 2013-01-01 The aim of this research is to explore the managerial role of category managers in purchasing. A network management perspective is adopted. A case-based research methodology is applied, and three category managers managing a diverse set of component and service categories in a global production...... firm are observed while providing accounts of their progress and results in meetings. We conclude that the network management classification scheme originally developed by Harland and Knight (2001) and Knight and Harland (2005) is a valuable and fertile theoretical framework for the analysis... 11. Analgesic and anti-inflammatory activity of root bark of Grewia asiatica Linn. in rodents Directory of Open Access Journals (Sweden) Udaybhan Singh Paviaya 2013-01-01 Conclusions: The present study indicates that root bark of G. asiatica exhibits peripheral and central analgesic effect and anti-inflammatory activity, which may be attributed to the various phytochemicals present in root bark of G. asiatica.
12. Bark thickness related to tree diameter in sugar maple (Acer saccharum Marsh.) Science.gov (United States) H. Clay Smith 1969-01-01 Bark thickness for sugar maple trees in Vermont was found to be related to tree diameter at breast height (d.b.h.). The relationship was positive: as the diameter increased, the bark thickness increased. 13. Homological algebra in n-abelian categories Indian Academy of Sciences (India) Deren Luo 2017-08-16 Aug 16, 2017 ... We recall that the Comparison lemma, together with its dual, plays a central role in the sequel. Lemma 2.1 [13, Comparison lemma 2.1]. Let C be an additive category and X ∈ Ch≥0(C) a complex such that for all k ≥ 0 the morphism d_X^{k+1} is a weak cokernel ... 14. Data categories for marine planning Science.gov (United States) Lightsom, Frances L.; Cicchetti, Giancarlo; Wahle, Charles M. 2015-01-01 The U.S. National Ocean Policy calls for a science- and ecosystem-based approach to comprehensive planning and management of human activities and their impacts on America’s oceans. The Ocean Community in Data.gov is an outcome of 2010–2011 work by an interagency working group charged with designing a national information management system to support ocean planning. Within the working group, a smaller team developed a list of the data categories specifically relevant to marine planning. This set of categories is an important consensus statement of the breadth of information types required for ocean planning from a national, multidisciplinary perspective. Although the categories were described in a working document in 2011, they have not yet been fully implemented explicitly in online services or geospatial metadata, in part because authoritative definitions were not created formally. This document describes the purpose of the data categories, provides definitions, and identifies relations among the categories and between the categories and external standards.
It is intended to be used by ocean data providers, managers, and users in order to provide a transparent and consistent framework for organizing and describing complex information about marine ecosystems and their connections to humans. 15. Tannins quantification in barks of Mimosa tenuiflora and Acacia mearnsii Directory of Open Access Journals (Sweden) Leandro Calegari 2016-03-01 Full Text Available Due to its chemical complexity, there are several methodologies for vegetable tannins quantification. Thus, this work aims at quantifying both tannin and non-tannin substances present in the barks of Mimosa tenuiflora and Acacia mearnsii by two different methods. From bark particles of both species, analytical solutions were produced by using a steam-jacketed extractor. The solution was analyzed by the Stiasny and hide-powder (not chromed) methods. For both species, tannin levels were superior when analyzed by the hide-powder method, reaching 47.8% and 24.1% for A. mearnsii and M. tenuiflora, respectively. By the Stiasny method, the tannin levels were 39.0% for A. mearnsii, and 15.5% for M. tenuiflora. Despite the best results presented by A. mearnsii, the bark of M. tenuiflora also showed great potential due to its considerable amount of tannin and the availability of the species in the Caatinga biome. 16. Antimicrobial activity of some medicinal barks used in the Peruvian Amazon. Science.gov (United States) Kloucek, P; Svobodova, B; Polesny, Z; Langrova, I; Smrcek, S; Kokoska, L 2007-05-04 The aim of this study was to evaluate the antimicrobial activity of six barks traditionally used in Callería District (Ucayali Department, Peru) for treating conditions likely to be associated with microorganisms.
Ethanol extracts of stem barks of Abuta grandifolia (Menispermaceae), Dipteryx micrantha (Leguminosae), Cordia alliodora (Boraginaceae), Naucleopsis glabra (Moraceae), Pterocarpus rohrii (Leguminosae), and root bark of Maytenus macrocarpa (Celastraceae) were tested against nine bacteria and one yeast using the broth microdilution method. All plants possessed significant antimicrobial effect, however, the extract of Naucleopsis glabra exhibited the strongest activity against Gram-positive bacteria (MICs ranging from 62.5 to 125 microg/ml), while the broadest spectrum of action was shown by the extract of Maytenus macrocarpa, which inhibited all the strains tested with MICs ranging from 125 to 250 microg/ml. 17. A phloem sandwich allowing attack and colonization by bark beetles (Coleoptera: Scolytidae) and associates Science.gov (United States) Andrew D. Taylor; Jane L. Hayes; John C. Moser 1992-01-01 Much of the life cycle of bark beetles and their associates is spent under the bark of the host tree and is impossible to observe under completely natural conditions. To observe the behavior and development of insects in the phloem layer, phloem sandwiches have been developed, in which a piece of bark and phloem is removed from a live tree and pressed against a... 18. Progress in the chemistry of shortleaf and loblolly pine bark flavonoids Science.gov (United States) R.W. Hemingway 1976-01-01 The forest products industries of the southern United States harvest approximately 7 million dry tons of pine bark each year. This resource receives little utilization other than recovery of fuel values. Approximately 2 million dry tons (30-40% of bark dry weight) of potentially valuable polyflavonoids are burned annually. Conifer bark flavonoids have potential... 19. Grinding and classification of pine bark for use as plywood adhesive filler Science.gov (United States) Thomas L. Eberhardt; Karen G.
Reed 2005-01-01 Prior efforts to incorporate bark or bark extracts into composites have met with only limited success because of poor performance relative to existing products and/or economic barriers stemming from high levels of processing. We are currently investigating applications for southern yellow pine (SYP) bark that require intermediate levels of processing, one being the use... 20. Antioxidant Activity and Cytotoxicity of the Leaf and Bark Extracts of ... African Journals Online (AJOL) Purpose: To investigate the antioxidant potential and cytotoxicity of the leaf and bark extracts of Tarchonanthus camphoratus. Methods: The antioxidant activity of the aqueous leaf extract (Aq LF), methanol leaf extract (MET LF), dichloromethane leaf extract (DCM LF), methanol bark extract (MET BK), dichloromethane bark ... 1. Comparisons of protein profiles of beech bark disease resistant and susceptible American beech (Fagus grandifolia) Science.gov (United States) Mary E. Mason; Jennifer L. Koch; Marek Krasowski; Judy Loo 2013-01-01 Beech bark disease is an insect-fungus complex that damages and often kills American beech trees and has major ecological and economic impacts on forests of the northeastern United States and southeastern Canada. The disease begins when exotic beech scale insects feed on the bark of trees, and is followed by infection of damaged bark tissues by one of the... 2. Dutch elm disease pathogen transmission by the banded elm bark beetle Scolytus schevyrewi Science.gov (United States) W. R. Jacobi; R. D. Koski; J. F. Negron 2013-01-01 Dutch Elm Disease (DED) is a vascular wilt disease of Ulmus species (elms) incited in North America primarily by the exotic fungus Ophiostoma novo-ulmi. The pathogen is transmitted via root grafts and elm bark beetle vectors, including the native North American elm bark beetle, Hylurgopinus rufipes, and the exotic smaller European elm bark beetle, Scolytus multistriatus... 3.
Depositional characteristics of atmospheric polybrominated diphenyl ethers on tree barks Directory of Open Access Journals (Sweden) Man Young Chun 2014-07-01 Full Text Available Objectives This study was conducted to determine the depositional characteristics of several tree barks, including Ginkgo (Ginkgo biloba), Pine (Pinus densiflora), Platanus (Platanus), and Metasequoia (Metasequoia glyptostroboides). These were used as passive air samplers (PAS) of atmospheric polybrominated diphenyl ethers (PBDEs). Methods Tree barks were sampled from the same site. PBDEs were analyzed by high-resolution gas chromatography/high-resolution mass spectrometer, and the lipid content was measured using the gravimetric method by n-hexane extraction. Results Ginkgo contained the highest lipid content (7.82 mg/g dry), whereas pine (4.85 mg/g dry), Platanus (3.61 mg/g dry), and Metasequoia (0.97 mg/g dry) had relatively lower content. The highest total PBDEs concentration was observed in Metasequoia (83,159.0 pg/g dry), followed by Ginkgo (53,538.4 pg/g dry), Pine (20,266.4 pg/g dry), and Platanus (12,572.0 pg/g dry). There were poor correlations between lipid content and total PBDE concentrations in tree barks (R² = 0.1011, p = 0.682). Among the PBDE congeners, BDE 206, 207 and 209 were highly brominated PBDEs that are sorbed to particulates in ambient air, which accounted for 90.5% (84.3-95.6%) of the concentration and were therefore identified as the main PBDE congeners. The concentrations of particulate PBDEs deposited on tree barks were dependent on morphological characteristics such as surface area or roughness of barks. Conclusions Therefore, when using the tree barks as the PAS of the atmospheric PBDEs, samples belonging to the same tree species should be collected to reduce errors and to obtain reliable data. 4. Depositional characteristics of atmospheric polybrominated diphenyl ethers on tree barks.
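The weak lipid-vs-PBDE correlation reported in the abstract above (R² = 0.1011) can be reproduced directly from the four lipid and total-PBDE values quoted there. A minimal sketch in plain Python (no external libraries; variable names are illustrative, not from the study):

```python
# Four tree species, in the order quoted in the abstract:
# Ginkgo, Pine, Platanus, Metasequoia
lipid = [7.82, 4.85, 3.61, 0.97]          # lipid content, mg/g dry
pbde = [53538.4, 20266.4, 12572.0, 83159.0]  # total PBDEs, pg/g dry

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(lipid, pbde)
print(round(r * r, 4))  # ≈ 0.1011, matching the R² reported in the abstract
```

The squared coefficient lands at roughly 0.1011 (and the raw correlation is in fact negative), which is consistent with the abstract's conclusion that lipid content alone does not explain PBDE deposition on bark.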
Science.gov (United States) Chun, Man Young 2014-07-17 This study was conducted to determine the depositional characteristics of several tree barks, including Ginkgo (Ginkgo biloba), Pine (Pinus densiflora), Platanus (Platanus), and Metasequoia (Metasequoia glyptostroboides). These were used as passive air sampler (PAS) of atmospheric polybrominated diphenyl ethers (PBDEs). Tree barks were sampled from the same site. PBDEs were analyzed by high-resolution gas chromatography/high-resolution mass spectrometer, and the lipid content was measured using the gravimetric method by n-hexane extraction. Ginkgo contained the highest lipid content (7.82 mg/g dry), whereas pine (4.85 mg/g dry), Platanus (3.61 mg/g dry), and Metasequoia (0.97 mg/g dry) had relatively lower content. The highest total PBDEs concentration was observed in Metasequoia (83,159.0 pg/g dry), followed by Ginkgo (53,538.4 pg/g dry), Pine (20,266.4 pg/g dry), and Platanus (12,572.0 pg/g dry). There were poor correlations between lipid content and total PBDE concentrations in tree barks (R² = 0.1011, p = 0.682). Among the PBDE congeners, BDE 206, 207 and 209 were highly brominated PBDEs that are sorbed to particulates in ambient air, which accounted for 90.5% (84.3-95.6%) of the concentration and were therefore identified as the main PBDE congeners. The concentrations of particulate PBDEs deposited on tree barks were dependent on morphological characteristics such as surface area or roughness of barks. Therefore, when using the tree barks as the PAS of the atmospheric PBDEs, samples belonging to the same tree species should be collected to reduce errors and to obtain reliable data. 5. Antioxidant and antimicrobial activities of Bauhinia racemosa L. stem bark Directory of Open Access Journals (Sweden) Kumar R.S.
2005-01-01 Full Text Available The present study was carried out to evaluate the antioxidant and antimicrobial activities of a methanol extract of Bauhinia racemosa (MEBR (Caesalpiniaceae stem bark in various systems. 1,1-Diphenyl-2-picrylhydrazyl (DPPH) radical, superoxide anion radical, nitric oxide radical, and hydroxyl radical scavenging assays were carried out to evaluate the antioxidant potential of the extract. The antioxidant activity of the methanol extract increased in a concentration-dependent manner. About 50, 100, 250, and 500 µg MEBR inhibited the peroxidation of a linoleic acid emulsion by 62.43, 67.21, 71.04, and 76.83%, respectively. Similarly, the effect of MEBR on reducing power increased in a concentration-dependent manner. In DPPH radical scavenging assays the IC50 value of the extract was 152.29 µg/ml. MEBR inhibited the nitric oxide radicals generated from sodium nitroprusside with an IC50 of 78.34 µg/ml, as opposed to 20.4 µg/ml for curcumin. Moreover, MEBR scavenged the superoxide generated by the PMS/NADH-NBT system. MEBR also inhibited the hydroxyl radical generated by Fenton's reaction, with an IC50 value of more than 1000 µg/ml, as compared to 5 µg/ml for catechin. The amounts of total phenolic compounds were also determined and 64.7 µg pyrocatechol phenol equivalents were detected in MEBR (1 mg). The antimicrobial activities of MEBR were determined by disc diffusion with five Gram-positive, four Gram-negative and four fungal species. MEBR showed broad-spectrum antimicrobial activity against all tested microorganisms. The results obtained in the present study indicate that MEBR can be a potential source of natural antioxidant and antimicrobial agents. 6. Lead isotope ratios in tree bark pockets: an indicator of past air pollution in the Czech Republic. Science.gov (United States) Conkova, M; Kubiznakova, J 2008-10-15 Tree bark pockets were collected at four sites in the Czech Republic with differing levels of lead (Pb) pollution.
The samples, spanning 1923-2005, were separated from beech (Fagus sylvatica) and spruce (Picea abies). Elevated Pb content (0.1-42.4 µg g⁻¹) reflected air pollution in the city of Prague. The lowest Pb content (0.3-2.6 µg g⁻¹) was found at the Kosetice EMEP "background pollution" site. Changes in ²⁰⁶Pb/²⁰⁷Pb and ²⁰⁸Pb/²⁰⁶Pb isotope ratios were in agreement with operation times of the Czech main anthropogenic Pb sources. Shortly after the Second World War, the ²⁰⁶Pb/²⁰⁷Pb isotope ratio in bark pockets decreased from 1.17 to 1.14 and the ²⁰⁸Pb/²⁰⁶Pb isotope ratio increased from 2.12 to 2.16. Two dominant emission sources responsible for these changes, lignite and leaded petrol combustion, contributed to the shifts in Pb isotope ratios. Low-radiogenic petrol Pb (²⁰⁶Pb/²⁰⁷Pb of 1.11) led to lower ²⁰⁶Pb/²⁰⁷Pb in bark pockets over time. High-radiogenic lignite-derived Pb (²⁰⁶Pb/²⁰⁷Pb of 1.18 to 1.19) was detected in areas affected by coal combustion rather than by traffic. 7. Chemical composition of barks from Quercus faginea trees and characterization of their lipophilic and polar extracts. Science.gov (United States) Ferreira, Joana P A; Miranda, Isabel; Sousa, Vicelina B; Pereira, Helena 2018-01-01 The bark from Quercus faginea mature trees from two sites was chemically characterized for the first time. The barks showed the following composition: ash 14.6%, total extractives 13.2%, suberin 2.9% and lignin 28.2%. The polysaccharides were composed mainly of glucose and xylose (50.3% and 35.1% of all monosaccharides respectively) with 4.8% of uronic acids. The suberin composition was: ω-hydroxyacids 46.3% of total compounds, α,ω-alkanoic diacids 22.3%, alkanoic acids 5.9%, alkanols 6.7% and aromatics 6.9% (ferulic acid 4.0%).
Polar extracts (ethanol-water) had a high phenolic content of 630.3 mg of gallic acid equivalents (GAE)/g of extract, condensed tannins 220.7 mg of catechin equivalents (CE)/g extract, and flavonoids 207.7 mg CE/g of extract. The antioxidant activity was very high, corresponding to 1567 mg Trolox equivalents/g of extract, and an IC50 of 2.63 μg extract/ml. The lipophilic extracts consisted mainly of glycerol and its derivatives (12.3% of all compounds), alkanoic acids (27.8%), sterols (11.5%) and triterpenes (17.8%). In view of an integrated valorization, Quercus faginea barks are interesting sources of polar compounds including phenols and polyphenols with possible interesting bioactivities, while the sterols and triterpenes contained in the lipophilic extracts are also valuable bioactive compounds or chemical intermediates for specific high-value market niches, such as cosmetics, pharmaceuticals and biomedicine. 8. Category O for quantum groups DEFF Research Database (Denmark) Andersen, Henning Haahr; Mazorchuk, Volodymyr 2015-01-01 We study the BGG-categories O_q associated to quantum groups. We prove that many properties of the ordinary BGG-category O for a semisimple complex Lie algebra carry over to the quantum case. Of particular interest is the case when q is a complex root of unity. Here we prove a tensor decomposition...... for simple modules, projective modules, and indecomposable tilting modules. Using the known Kazhdan–Lusztig conjectures for O and for finite-dimensional U_q-modules we are able to determine all irreducible characters as well as the characters of all indecomposable tilting modules in O_q . As a consequence......, we also recover the known result that the generic quantum case behaves like the classical category O.... 9.
Improvement of nutritive value of Acacia mangium bark by alkali treatment Directory of Open Access Journals (Sweden) Elizabeth Wina 2001-10-01 Full Text Available Bark, especially from Acacia mangium, is a by-product of wood processing industries that is commonly found in Indonesia and in large amounts will cause environmental problems. One of the alternatives to utilize bark is for animal feed. The aims of this experiment were to improve the nutritive value of bark by alkali treatments (urea and sodium hydroxide) and to determine the level of substitution of elephant grass by bark. The experiment consisted of 3 in vitro studies and 1 in sacco study. The in vitro studies consisted of (1) the use of urea or NaOH by the wetting and incubation method, (2) the use of different concentrations of NaOH (0-4%) by the soaking method, and (3) determination of the substitution level of elephant grass by treated bark. The in sacco study was conducted at 0, 6, 12, 24, 48 and 72 hours of incubation to compare the degradation of treated bark to elephant grass. The results show that urea treatment did not improve DM or OM digestibilities of bark. Soaking bark in 4% NaOH solution was more effective than the wetting and incubation method in improving in vitro digestibility (49.26% vs 19.56% for the soaking and dry methods, respectively). The in sacco study shows that treated bark had a very high solubility at 0 hours of incubation, but the degradation at 72 hours of incubation was not significantly different from that at 0 hours. The gas produced in the in vitro study of treated bark was very low, indicating that there was no degradation of bark at all. Substitution of elephant grass by treated bark up to 30% gave a digestibility value not significantly different from that of 100% elephant grass. In conclusion, bark after tannin extraction was a better feedstuff for animal feed. Soaking in 4% NaOH solution improved the digestibility of bark significantly, and the level of substitution of elephant grass by treated bark was 30%.
10. Hippocampal activation during episodic and semantic memory retrieval: comparing category production and category cued recall. Science.gov (United States) Ryan, Lee; Cox, Christine; Hayes, Scott M; Nadel, Lynn 2008-01-01 Whether or not the hippocampus participates in semantic memory retrieval has been the focus of much debate in the literature. However, few neuroimaging studies have directly compared hippocampal activation during semantic and episodic retrieval tasks that are well matched in all respects other than the source of the retrieved information. In Experiment 1, we compared hippocampal fMRI activation during a classic semantic memory task, category production, and an episodic version of the same task, category cued recall. Left hippocampal activation was observed in both episodic and semantic conditions, although other regions of the brain clearly distinguished the two tasks. Interestingly, participants reported using retrieval strategies during the semantic retrieval task that relied on autobiographical and spatial information; for example, visualizing themselves in their kitchen while producing items for the category "kitchen utensils". In Experiment 2, we considered whether the use of these spatial and autobiographical retrieval strategies could have accounted for the hippocampal activation observed in Experiment 1. Categories were presented that elicited one of three retrieval strategy types: autobiographical and spatial, autobiographical and nonspatial, and neither autobiographical nor spatial. Once again, similar hippocampal activation was observed for all three category types, regardless of the inclusion of spatial or autobiographical content. We conclude that the distinction between semantic and episodic memory is more complex than classic memory models suggest. 11. FINANCIAL CONTROL AS A CATEGORY Directory of Open Access Journals (Sweden) Andrey Yu.
Volkov 2014-01-01 Full Text Available The article reveals the basics of “financial control” as a category. The main attention is concentrated on “control” itself (as a term), the multiplicity of interpretations of the “financial control” term, and its juridical-practical matching. The duality of the financial control category is detected. The identity of the terms “financial control” and “state financial control” is justified. The article also offers ways to develop the juridical regulation of financial control. 12. The usability of tree barks as long term biomonitors of atmospheric radionuclide deposition Energy Technology Data Exchange (ETDEWEB) Belivermis, Murat, E-mail: [email protected] [Istanbul University, Faculty of Science, Department of Biology, 34134 Vezneciler, Istanbul (Turkey); Kilic, Onder, E-mail: [email protected] [Istanbul University, Faculty of Science, Department of Biology, 34134 Vezneciler, Istanbul (Turkey); Cotuk, Yavuz, E-mail: [email protected] [Istanbul University, Faculty of Science, Department of Biology, 34134 Vezneciler, Istanbul (Turkey); Topcuoglu, Sayhan, E-mail: [email protected] [Istanbul University, Faculty of Science, Department of Biology, 34134 Vezneciler, Istanbul (Turkey); Kalayci, Guelsah, E-mail: [email protected] [Istanbul University, Faculty of Science, Department of Biology, 34134 Vezneciler, Istanbul (Turkey); Pestreli, Didem, E-mail: [email protected] [Istanbul University, Faculty of Science, Department of Biology, 34134 Vezneciler, Istanbul (Turkey) 2010-12-15 In view of the lower radionuclide activities of moss and lichen, tree barks can be used as biomonitors of radioactive contamination, regardless of the contribution of soil uptake. The present study was conducted to determine the activity concentrations of ¹³⁷Cs, ⁴⁰K, ²³²Th and ²³⁸U in the barks of pine (Pinus nigra) and oak (Quercus petraea) trees collected from the Thrace region in Turkey.
Comparison with previous studies carried out in the same region shows that, among lichen, moss, oak bark and pine bark, oak bark is the best accumulator of ¹³⁷Cs and natural radionuclides. 13. International Conference on Category Theory CERN Document Server Pedicchio, Maria; Rosolini, Guiseppe 1991-01-01 With one exception, these papers are original and fully refereed research articles on various applications of Category Theory to Algebraic Topology, Logic and Computer Science. The exception is an outstanding and lengthy survey paper by Joyal/Street (80 pp) on a growing subject: it gives an account of classical Tannaka duality in such a way as to be accessible to the general mathematical reader, and to provide a key for entry to more recent developments and quantum groups. No expertise in either representation theory or category theory is assumed. Topics such as the Fourier cotransform, Tannaka duality for homogeneous spaces, braided tensor categories, Yang-Baxter operators, knot invariants and quantum groups are introduced and studied. From the Contents: P.J. Freyd: Algebraically complete categories.- J.M.E. Hyland: First steps in synthetic domain theory.- G. Janelidze, W. Tholen: How algebraic is the change-of-base functor?.- A. Joyal, R. Street: An introduction to Tannaka duality and quantum groups.- A. Jo... 14. Learnable Classes of Categorial Grammars. Science.gov (United States) Kanazawa, Makoto Learnability theory is an attempt to illuminate the concept of learnability using a mathematical model of learning. Two models of learning of categorial grammars are examined here: the standard model, in which sentences presented to the learner are flat strings of words, and one in which sentences are presented in the form of functor-argument… 15. Language universals without universal categories NARCIS (Netherlands) Croft, W.; van Lier, E.
2012-01-01 In this article, the authors present their views on an article by author Sandra Chung related to lexical categories. According to them, Chung's article critiques an analysis of word classes in Chamorro by author Donald M. Topping. They discuss the restatements made by Chung on Topping's criteria for 16. Auditory and phonetic category formation NARCIS (Netherlands) Goudbeek, Martijn; Cutler, A.; Smits, R.; Swingley, D.; Cohen, Henri; Lefebvre, Claire 2017-01-01 Among infants' first steps in language acquisition is learning the relevant contrasts of the language-specific phonemic repertoire. This learning is viewed as the formation of categories in a multidimensional psychophysical space. Research in the visual modality has shown that for adults, some kinds 17. Possible antimicrobial activity of Morinda lucida stem bark, leaf and ... African Journals Online (AJOL) MR FAKOYA AKINDELE 2014-01-15 Jan 15, 2014 ... are used in the treatment of different types of diseases. Roots, barks or leaves of Newbouldia laevis are used in the treatment of dysentery, syphilis, earache, ringworm and scrotal elephantiasis (Azoro, 2002). Morinda lucida, known as Oruwo in the South-Western part of Nigeria, is a medium-sized tree with a ...
The levels of manganese, zinc and iron were 13.91, 4.89 and 21.89 mg/L respectively. These heavy metals ... 20. Effect of an Aqueous Extract of Entandrophragma utile Bark on ... African Journals Online (AJOL) Adjunct therapy is needed for patients with compromised gastrointestinal mucosa due to necessary aspirin usage against cardiovascular disorders. We tested the Nigerian bark extract of Entandrophragma utile on gastric acid secretion (GA) and peptic activity (PA). Rats were ligated at the pylorus for collection of gastric ... 1. Some behavioural studies on methanol root bark extract of Burkea ... African Journals Online (AJOL) The research was conducted to evaluate some central nervous system properties of the root bark methanol extract of B. africana in mice. It involved the following animal models: diazepam-induced sleep, hole-board and walking beam assay. Results: The methanol extract showed a significant decrease in the onset of sleep ... 2. Gum from the bark of Anogeissius leiocarpus as a potential ... African Journals Online (AJOL) Gum from the bark of Anogeissius leiocarpus as a potential pharmaceutical raw material – granule properties. Philip F Builders, Olubayo O Kunle, Yetunde C Isimi. Abstract. With the continuous effort to discover and produce cheap but high-quality excipients for drug production, Anogeissius leiocarpus gum (ALG), a brownish ... 3. Antimicrobial activity of Diospyros melanoxylon bark from Similipal ... African Journals Online (AJOL) STORAGESEVER 2009-05-04 May 4, 2009 ... Phytomedicines have been an integral part of traditional .... inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) of D. melanoxylon bark extracts on bacterial strains. S. aureusa. S. epidermidisa. B. licheniformisa. E. colia ... wrappers in the bidi (cigarette) industry (Mallavadhani et. 4. Stem Bark Extracts of Ficus exasperata protects the Liver against ...
African Journals Online (AJOL) Ficus exasperata is an important medicinal plant with a wide geographical distribution in Africa, particularly in Nigeria. In this study, aqueous stem bark extracts of Ficus exasperata were administered to investigate their hepatoprotective effects on paracetamol-induced liver toxicity in Wistar rats. A total of twenty-five Wistar rats ... 5. Effects of bioactive principles from stem bark extract of Quassia ... African Journals Online (AJOL) Chigo Okwuosa Effects of bioactive principles from stem bark extract of Quassia amara, Quassin and 2-methoxycanthine-6-one, on haematological parameters in albino rats. Raji Yinusa. Department of Physiology, College of Medicine, University of Ibadan. Nigeria. Summary: The effect of Quassia amara extract and two isolated compounds ... 6. Phytochemical screening and antibacterial evaluation of stem bark ... African Journals Online (AJOL) SERVER 2007-07-04 Jul 4, 2007 ... Mallotus philippinensis var. Tomentosus is a medicinal plant, which was tested against Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, Salmonella typhi and Bacillus subtilis. Phytochemical screening of the stem bark of M. philippinensis indicates the presence of secondary ... 7. Aqueous Bark Extract of Cinnamomum Zeylanicum: A Potential ... African Journals Online (AJOL) Aqueous Bark Extract of Cinnamomum Zeylanicum: A Potential Therapeutic Agent for Streptozotocin-Induced Type 1 Diabetes Mellitus (T1DM) Rats. ... Methods: The animals were divided into three groups (n = 6): normal rats; streptozotocin-induced diabetic rats; and diabetic rats treated with 200 mg/kg of the aqueous ... 8. Antimicrobial activity of Diospyros melanoxylon bark from Similipal ... African Journals Online (AJOL) The antimicrobial activity of five extracts of Diospyros melanoxylon Roxb. bark collected from Similipal Biosphere Reserve, Orissa, was evaluated against human pathogenic bacteria and fungi.
The extracts, including both polar and non-polar solvents (petroleum ether, chloroform, ethanol, methanol and aqueous), were ... 9. Gut bacteria of bark and wood-boring beetles Science.gov (United States) Archana Vasanthakumar; Yasmin Cardoza; Italo Delalibera; Patrick Schloss; Jo Handelsman; Kier Klepzig; Kenneth Raffa 2007-01-01 Bark beetles are known to have complex associations with a variety of microorganisms (Paine and others 1987; Ayres and others 2000; Six and Klepzig 2004). However, most of our knowledge involves fungi, particularly external species. In contrast, we know very little about their associations with bacterial gut symbionts (Bridges 1981). Similarly, work with wood... 10. Anthelmintic and Other Pharmacological Activities of the Root Bark ... African Journals Online (AJOL) The anthelmintic activity of water, methanol and chloroform extracts of the root bark of Albizia anthelmintica on strongyle-type sheep nematode eggs and larvae was examined in vitro. In addition, pharmacological tests were carried out on the water extract to confirm other ethnomedical uses of the plant. The water extract ... 11. Hyperglycemic effect and hepatotoxicity studies of stem bark of ... African Journals Online (AJOL) Serum AST, ALT, ALP, glucose, bilirubin (total and direct) showed significant increase (P<0.05) in groups B and C rats but were lower than those of group A. The results indicate that the extract of Khaya senegalensis stem bark and highland (green) tea leaves caused increased activity of the liver enzymes studied, which is ... 12. Constituents from the bark resin of Schinus molle OpenAIRE Malca-García, Gonzalo Rodolfo; Hennig, Lothar; Ganoza-Yupanqui, Mayar Luis; Piña-Iturbe, Alejandro; Bussmann, Rainer W. 2017-01-01 ABSTRACT A total of five terpenes was isolated from the bark resin of Schinus molle L., Anacardiaceae, and their structures were determined by spectroscopic techniques.
Among these compounds, the sesquiterpene hydrocarbon terebinthene showed significant growth inhibitory activity against human colon carcinoma HCT-116 cells. Furthermore, terebinthene and pinicolic acid (5) also showed antibacterial activity against Staphylococcus aureus ATCC 25923 and Bacillus subtilis ATCC 6633. 13. Comparative study of thermal insulation boards from leaf and bark ... African Journals Online (AJOL) Thus, several studies have succeeded in using these plants and agro-waste fibres in developing renewable and environmentally friendly thermal insulation products. The aim of this study was to compare the performance of insulation boards made from leaf and bark fibres of Piliostigma thonningii L. in terms of density, ... 14. Barking up the right tree: Understanding local attitudes towards dogs ... African Journals Online (AJOL) Barking up the right tree: Understanding local attitudes towards dogs in villages ... for hunting, and 41.2% reported that their dog had killed at least one wild animal, with 11.8% reporting that ... 15. Clerodane diterpenes from bark of Croton urucurana Baillon Energy Technology Data Exchange (ETDEWEB) Pizzolatti, Moacir G.; Bortoluzzi, Adailton J.; Brighente, Ines M.C.; Zuchinalli, Analice; Carvalho, Francieli K., E-mail: [email protected] [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil). Departamento de Química; Candido, Ana C. S.; Peres, Marize T.L.P. [Universidade Federal do Mato Grosso do Sul (UFMS), Campo Grande, MS (Brazil). Departamento de Hidraulica e Transportes 2013-04-15 The new clerodane diterpene methyl 3-oxo-12-epibarbascoate was isolated from the stem barks of Croton urucurana together with the known diterpene methyl 12-epibarbascoate. The structures of these compounds were elucidated by spectroscopic techniques and comparison with the literature data.
The crystals obtained allowed X-ray diffraction analysis of the diterpenes, confirming the proposed structures. (author) 16. Antimosquito Phenylpropenoids from the Stem and Root Barks of ... African Journals Online (AJOL) Michael Horsfall The plant species was identified on site and its identity was further confirmed at the Herbarium of the Department of Botany, University of Dar es Salaam, where a voucher specimen is deposited. Extraction and Isolation: The air-dried and pulverized root and stem barks were extracted sequentially with CHCl3 and MeOH, 2 x ... 17. Present state of beech bark disease in Germany Science.gov (United States) Klaus J. Lang 1983-01-01 Beech bark disease can be found at the present time in young and old stands (20-150 years old) of Fagus sylvatica. The present state of the disease may be described as "normal" and, apart from some cases, it is no threat to the existence of the stands. 18. assessment of acidity levels in eucalyptus camaldulensis barks from ... African Journals Online (AJOL) BARTH EKWUEME Bark samples of Eucalyptus camaldulensis obtained from Bauchi and Gombe States were analysed spectrophotometrically for their sulphate-sulphur content. The aim was to assess the extent of sulphur pollution in the environment. The results showed that S concentration ranged from 0.79 to 1.70 mg/g for samples from ... 20.
Anti-inflammatory activity of bark of Xeromphis spinosa Directory of Open Access Journals (Sweden) Biswa Nath Das 2009-06-01 Full Text Available The bark of Xeromphis spinosa, extracted with a mixture of equal proportions of petroleum ether, ethyl acetate and methanol, at an oral dose of 200 and 400 mg/kg body weight exhibited significant anti-inflammatory activity when compared with control. 1. Bark beetle management after a mass attack - some Swiss experiences Science.gov (United States) B. Forster; F. Meier; R. Gall 2003-01-01 In 1990 and 1999, heavy storms, accompanied by the worst gales ever recorded in Switzerland, struck Europe and left millions of cubic metres of windthrown Norway spruce trees; this provided breeding material for the eight-toothed spruce bark beetle (Ips typographus L.) and led to mass attacks in subsequent years, which resulted in the additional loss... 2. Acute toxicity studies of aqueous stem bark extract of Ximenia ... African Journals Online (AJOL) STORAGESEVER 2008-05-16 May 16, 2008 ... the aqueous stem bark extract revealed the presence of cardiac ... needs of rural populations in African and other third world ... Table 1. Phytochemical screening of Ximenia Americana. ... Table 2. Post-mortem gross pathology result of acute toxicity of ... while the treated groups showed variable weight loss. 3. In vitro Antibacterial Activity of Alchornea cordifolia Bark Extract ... African Journals Online (AJOL) Four extracts of Alchornea cordifolia (Schumach.) Müll. Arg. (Euphorbiaceae) bark, including aqueous, methanol, acetone and hexane extracts, were tested for their antibacterial activities against Salmonella typhi, Salmonella paratyphi A and Salmonella paratyphi B, using both agar diffusion and broth dilution methods. 4. Flavan and procyanidin glycosides from the bark of blackjack oak Science.gov (United States) Young-Soo Bae; Johann F.W. Burger; Jan P. Steynberg; Daniel Ferreira; Richard W.
Hemingway 1994-01-01 The bark of blackjack oak contains (+)-catechin, (-)-epicatechin, (+)-3-O-[β-D-glucopyranosyl]-catechin, catechin-(4α→8)-catechin, epicatechin-(4β→8)-catechin as well as the novel 3-O-[β-D-glucopyranosyl]-catechin-(4α→8)-catechin and 3-O... 5. In vitro evaluation of inhibitory effect of Phoenix dactylifera bark ... African Journals Online (AJOL) Conclusion: The findings of this study indicate significant anti-lipid peroxidation and anti-hemolytic effects of the bark extract. Therefore, the extract can potentially be used for the in vivo treatment of diseases associated with lipid peroxidation such as cancers and Alzheimer's disease, but further studies are required. 6. Ethanol stem bark extract of Rauwolfia vomitoria ameliorates MPTP ... African Journals Online (AJOL) Methods: Parkinson's disease was induced in rats by a single intraperitoneal (IP) injection of MPTP. After 72 h of induction, the young adult male rats were treated with oral administration of stem bark ethanol extract of the plant daily for 2 weeks. The blood chemistry, antioxidant markers and brain dopamine levels were ... 7. Effects of the ethanolic stem bark extract of pterocarpus erinaceus ... African Journals Online (AJOL) This finding might lend credence to the use of the stem bark of the plant in the traditional treatment of diarrhea and dysentery. From the results of this work and information from the literature, flavonoids and tannins identified during phytochemical screening of the extract may be the biologically active components responsible ... 8. Antibacterial assessment of whole stem bark of Vitex doniana ... African Journals Online (AJOL) diffusion method and the minimum inhibitory concentration. The stem bark extracts were able to inhibit the growth pattern of the tested microorganisms. In all cases Shigella dysenteriae showed the highest sensitivity. The results suggest that V. doniana may be valuable in the management of dysentery and gastroenteritis ... 9.
Modulatory effect of Morinda lucida aqueous stem bark extract on ... African Journals Online (AJOL) Modulatory effect of Morinda lucida aqueous stem bark extract on blood glucose and lipid profile in alloxan-induced diabetic rats. ... 8th day of oral extract treatments, while the blood samples for the lipid assays were obtained directly from the heart chambers through cardiac puncture on the 8th day after an overnight fast. 10. Management strategies for bark beetles in conifer forests Science.gov (United States) Christopher Fettig; Jacek Hilszczański 2015-01-01 Several species of bark beetles (Coleoptera: Curculionidae, Scolytinae) are capable of causing significant amounts of tree mortality in conifer forests throughout much of the world. In most cases, these events are part of the ecology of conifer forests and positively influence many ecological processes, but the economic and social implications can be... 11. Strip-Bark Morphology and Radial Growth Trends in Ancient Pinus sibirica Trees From Central Mongolia Science.gov (United States) Leland, Caroline; Cook, Edward R.; Andreu-Hayles, Laia; Pederson, Neil; Hessl, Amy; Anchukaitis, Kevin J.; Byambasuren, Oyunsanaa; Nachin, Baatarbileg; Davi, Nicole; D'Arrigo, Rosanne; Griffin, Kevin; Bishop, Daniel A.; Rao, Mukund Palat 2018-03-01 Some of the oldest and most important trees used for dendroclimatic reconstructions develop strip-bark morphology, in which only a portion of the stem contains living tissue. Yet the ecophysiological factors initiating strip bark and the potential effect of cambial dieback on annual ring widths and tree-ring estimates of past climate remain poorly understood. Using a combination of field observations and tree-ring data, we investigate the causes and timing of cambial dieback events in Pinus sibirica strip-bark trees from central Mongolia and compare the radial growth rates and trends of strip-bark and whole-bark trees over the past 515 years.
Results indicate that strip bark is more common on the southern aspect of trees, and dieback events were most prevalent in the 19th century, a cold and dry period. Further, strip-bark and whole-bark trees have differing centennial trends, with strip-bark trees exhibiting notably large increases in ring widths at the beginning of the 20th century. We find a steeper positive trend in the strip-bark chronology relative to the whole-bark chronology when standardizing with age-dependent splines. We hypothesize that localized warming on the southern side of stems due to solar irradiance results in physiological damage and dieback and leads to increasing tree-ring increment along the living portion of strip-bark trees. Because the impact of cambial dieback on ring widths likely varies depending on species and site, we suggest conducting a comparison of strip-bark and whole-bark ring widths before statistically treating ring-width data for climate reconstructions. 12. Senna singueana: Antioxidant, Hepatoprotective, Antiapoptotic Properties and Phytochemical Profiling of a Methanol Bark Extract Directory of Open Access Journals (Sweden) Mansour Sobeh 2017-09-01 Full Text Available Natural products are considered an important source for the discovery of new drugs to treat aging-related degenerative diseases and liver injury. The present study profiled the chemical constituents of a methanol extract from Senna singueana bark using HPLC-PDA-ESI-MS/MS, and 36 secondary metabolites were identified. Proanthocyanidins dominated the extract. Monomers, dimers, and trimers of (epi)catechin, (epi)gallocatechin, (epi)guibourtinidol, (ent)-cassiaflavan, and (epi)afzelechin represented the major constituents. The extract demonstrated notable antioxidant activities in vitro: in DPPH (EC50 of 20.8 µg/mL) and FRAP (18.16 mM FeSO4/mg extract) assays, and the total phenolic content amounted to 474 mg gallic acid equivalents (GAE)/g extract, determined with the Folin-Ciocalteu method.
Also, in an in vivo model, the extract increased the survival rate of Caenorhabditis elegans worms pretreated with the pro-oxidant juglone from 43 to 64%, decreased intracellular ROS inside the wild-type nematodes by 47.90%, and induced nuclear translocation of the transcription factor DAF-16 in the transgenic strain TJ356. Additionally, the extract showed a remarkable hepatoprotective activity against d-galactosamine (d-GalN) induced hepatic injury in rats. It significantly reduced elevated AST (aspartate aminotransferase) and total bilirubin. Moreover, the extract induced a strong cytoplasmic Bcl-2 expression, indicating suppression of apoptosis. In conclusion, the bark extract of S. singueana represents an interesting candidate for further research in antioxidants and liver protection. 13. Bark-beetle infestation affects water quality in the Rocky Mountains of Colorado Science.gov (United States) Mikkelson, K.; Dickenson, E.; Maxwell, R. M.; McCray, J. E.; Sharp, J. O. 2012-12-01 In the previous decade, millions of acres in the Rocky Mountains of Colorado have been infested by the mountain pine beetle (MPB), leading to large-scale tree mortality. These vegetation changes can impact hydrological and biogeochemical processes, possibly altering the leaching of natural organic matter to surrounding waters and increasing the potential for harmful disinfection byproducts (DBP) during water treatment. To investigate these adverse outcomes, we have collected water quality data sets from local water treatment facilities in the Rocky Mountains of Colorado that have either been infested with MPB or remain a control. Results demonstrate significantly more total organic carbon (TOC) and DBPs in water treatment facilities receiving their source water from infested watersheds as compared to the control sites. Temporal DBP concentrations in MPB-watersheds also have increased significantly in conjunction with the bark-beetle infestation.
Interestingly, only modest increases in TOC concentrations were observed in infested watersheds despite more pronounced increases in DBP concentrations. Total trihalomethanes, a heavily regulated DBP, were found to approach the regulatory limit in two out of four reporting quarters at facilities receiving their water from infested forests. These findings indicate that bark-beetle infestation alters TOC composition and loading in impacted watersheds and that this large-scale phenomenon has implications for the municipal water supply in the region. 14. BALANOCARPOL AND AMPELOPSIN H, TWO OLIGORESVERATROLS FROM STEM BARK OF Hopea odorata (DIPTEROCARPACEAE) Directory of Open Access Journals (Sweden) Sri Atun 2010-06-01 Full Text Available Two oligoresveratrols, namely balanocarpol (2) and ampelopsin H (3), were isolated from the stem bark of Hopea odorata (Dipterocarpaceae). The structures of these compounds were elucidated based on physical and spectroscopic data (MS, 1D and 2D 1H and 13C NMR). The activity of these compounds was evaluated against the 2-deoxyribose degradation induced by the hydroxyl radical generated via a Fenton-type reaction. The results showed that balanocarpol and ampelopsin H act as hydroxyl radical scavengers with IC50 values of 1802.3 and 4840.0 μg/mL, respectively. Each compound showed low activity. Vitamin C (IC50 83.9 μg/mL) and butylated hydroxytoluene (IC50 1328.0 μg/mL) were used as positive controls. These results suggest that oligoresveratrols from the stem bark of H. odorata may be useful as potential sources of natural antioxidants. Keywords: balanocarpol, ampelopsin H, antioxidant, Dipterocarpaceae 15. A survey of public attitudes towards barking dogs in New Zealand.
Science.gov (United States) Flint, E L; Minot, E O; Perry, P E; Stafford, K J 2014-11-01 To investigate public attitudes towards barking dogs in New Zealand in order to quantify the extent to which people perceive barking dogs to be a problem, to compare tolerance of barking with that of other common suburban noises, to assess the level of public understanding about the function of barking, to determine risk factors for intolerance of barking and to assess knowledge of possible strategies for the investigation and management of problem barking. A 12-page questionnaire was sent to 2,000 people throughout New Zealand randomly selected from the electoral roll. Risk factors for being bothered by barking were examined using logistic regression analysis. A total of 1,750 questionnaires were successfully delivered; of these, 727 (42%) were returned. Among respondents, 356/727 (49.0%) indicated that frequent barking during the day would bother them while 545/727 (75.0%) would be bothered by barking at night. Barking and howling were ranked above other suburban noises as a cause of annoyance. Risk factors for being bothered by daytime barking were not being home during the day, not owning a dog, and considering a dog bite to be a serious health risk. Risk factors for being bothered by night-time barking were not being home during the day, marital status, considering dog bites to pose a serious health risk, and having been frightened by a dog. Overall, 510/699 (73%) respondents understood that barking was a form of communication. Action likely to be taken by 666 respondents hearing frequent barking included notifying and offering to help the owner (119; 17.8%), complaining to the owner (127; 19.1%) or the authorities (121; 18.2%), or doing nothing (299; 48%). Possible responses by 211 dog owners if they had a barking dog included seeking help from dog trainers (59; 28%) or behaviourists (54; 26%), buying an anti-barking device (33; 15%) or getting rid of the dog (20; 10%). 
Barking was considered to be potentially disturbing by respondents to this survey 16. Uranium isotopes in tree bark as a spatial tracer of environmental contamination near former uranium processing facilities in southwest Ohio. Science.gov (United States) Conte, Elise; Widom, Elisabeth; Kuentz, David 2017-11-01 Inappropriate handling of radioactive waste at nuclear facilities can introduce non-natural uranium (U) into the environment via the air or groundwater, leading to anthropogenic increases in U concentrations. Uranium isotopic analyses of natural materials (e.g. soil, plants or water) provide a means to distinguish between natural and anthropogenic U in areas near sources of radionuclides to the environment. This study examines the utility of two different tree bark transects for resolving the areal extent of U atmospheric contamination using several locations in southwest Ohio that historically processed U. This study is the first to utilize tree bark sampling transects to assess environmental contamination emanating from a nuclear facility. The former Fernald Feed Materials Production Center (FFMPC; Ross, Ohio) produced U metal from natural U ores and recycled nuclear materials from 1951 to 1989. Alba Craft Laboratory (Oxford, Ohio) machined several hundred tons of natural U metal from the FFMPC between 1952 and 1957. The Herring-Hall-Marvin Safe Company (HHM; Hamilton, Ohio) intermittently fabricated slugs rolled from natural U metal stock for use in nuclear reactors from 1943 to 1951. 
We have measured U concentrations and isotope signatures in tree bark sampled along an ∼35 km SSE-NNW transect from the former FFMPC to the vicinity of the former Alba Craft laboratories (transect #1) and an ∼20 km SW-NE (prevailing local wind direction) transect from the FFMPC to the vicinity of the former HHM (transect #2), with a focus on old trees with thick, persistent bark that could potentially record a time-integrated signature of environmental releases of U related to anthropogenic activity. Our results demonstrate the presence of anthropogenic U contamination in tree bark from the entire study area in both transects, with U concentrations within 1 km of the FFMPC up to ∼400 times local background levels of 0.066 ppm. Tree bark samples from the Alba Craft and 17. Biosorptive behavior of mango (Mangifera indica) and neem (Azadirachta indica) barks for ¹³⁴Cs from aqueous solutions. A radiotracer study International Nuclear Information System (INIS) Mishra, S.P.; Tiwari, D.; Prasad, S.K.; Dubey, R.S.; Mishra, M. 2007-01-01 The role of dead biomasses, viz. mango (Mangifera indica) and neem (Azadirachta indica) bark samples, is assessed in the removal of Cs(I), one of the important fission fragments, from aqueous solutions employing a radiotracer technique. Batch-type studies were carried out to obtain various physico-chemical data. It is to be noted that increases in sorptive concentration (from 1.0 × 10⁻⁸ to 1.0 × 10⁻² mol dm⁻³), temperature (from 298 to 328 K) and pH (2.6 to 10.3) apparently favor the uptake of Cs(I) by these two bark samples. The concentration-dependence data obeyed the Freundlich adsorption isotherm, and the uptake follows a first-order rate law. Thermodynamic data evaluation and desorption experiments reveal the adsorption to be irreversible and endothermic in nature, proceeding through ion exchange and surface complexation for both dead biomasses.
Both bark samples showed a fairly good radiation stability in respect of adsorption uptake of Cs(I) when irradiated with a 300 mCi (Ra-Be) neutron source having an integral neutron flux of ∼3.85 × 10^6 n cm^-2 s^-1 and associated with a nominal γ-dose of ∼1.72 Gy h^-1. (author) 18. Grammatical Constructions as Relational Categories. Science.gov (United States) Goldwater, Micah B 2017-07-01 This paper argues that grammatical constructions, specifically argument structure constructions that determine the "who did what to whom" part of sentence meaning and how this meaning is expressed syntactically, can be considered a kind of relational category. That is, grammatical constructions are represented as the abstraction of the syntactic and semantic relations of the exemplar utterances that are expressed in that construction, and it enables the generation of novel exemplars. To support this argument, I review evidence that there are parallel behavioral patterns between how children learn relational categories generally and how they learn grammatical constructions specifically. Then, I discuss computational simulations of how grammatical constructions are abstracted from exemplar sentences using a domain-general relational cognitive architecture. Last, I review evidence from adult language processing that shows parallel behavioral patterns with expert behavior from other cognitive domains. After reviewing the evidence, I consider how to integrate this account with other theories of language development. Copyright © 2017 Cognitive Science Society, Inc. 19. A Formal Calculus for Categories DEFF Research Database (Denmark) Cáccamo, Mario José This dissertation studies the logic underlying category theory. In particular we present a formal calculus for reasoning about universal properties. The aim is to systematise judgements about functoriality and naturality central to categorical reasoning. The calculus is based on a language which......
extends the typed lambda calculus with new binders to represent universal constructions. The types of the language are interpreted as locally small categories and the expressions represent functors. The logic supports a syntactic treatment of universality and duality. Contravariance requires a definition...... of universality generous enough to deal with functors of mixed variance. Ends generalise limits to cover these kinds of functors and moreover provide the basis for a very convenient algebraic manipulation of expressions. The equational theory of the lambda calculus is extended with new rules for the definitions... 20. Seismic Category I Structures Program International Nuclear Information System (INIS) Endebrock, E.G.; Dove, R.C.; Anderson, C.A. 1984-01-01 The Seismic Category I Structures Program currently being carried out at the Los Alamos National Laboratory is sponsored by the Mechanical/Structural Engineering Branch, Division of Engineering Technology of the Nuclear Regulatory Commission (NRC). This project is part of a program designed to increase confidence in the assessment of Category I nuclear power plant structural behavior beyond the design limit. The program involves the design, construction, and testing of heavily reinforced concrete models of auxiliary buildings, fuel-handling buildings, etc., but does not include the reactor containment building. The overall goal of the program is to supply to the Nuclear Regulatory Commission experimental information and a validated procedure to establish the sensitivity of the dynamic response of these structures to earthquakes of magnitude beyond the design basis earthquake 1. Different Categories of Business Risk Directory of Open Access Journals (Sweden) Simona-Valeria TOMA 2011-11-01 Full Text Available Every business organisation involves some element of risk. Unmitigated risks can result in lost opportunity, financial losses, loss of reputation, or loss of the right to operate in a jurisdiction.
Like any other risk type, understanding business risks is quite important for every business to garner profits instead of facing losses. A business risk is a universal risk type; this means that every business in the world faces business risks. Therefore, it is imperative to understand the different categories of business risk in order to create the appropriate strategies. The aim of this paper is to describe the most important categories of business risks and to make sure that every type of risk receives equal treatment and consideration. 2. Virtue Ethics: The Misleading Category OpenAIRE Martha Nussbaum 1998-01-01 Virtue ethics is frequently considered to be a single category of ethical theory, and a rival to Kantianism and Utilitarianism. I argue that this approach is a mistake, because both Kantians and Utilitarians can, and do, have an interest in the virtues and the formation of character. But even if we focus on the group of ethical theorists who are most commonly called "virtue theorists" because they reject the guidance of both Kantianism and Utilitarianism, and derive inspiration from ancient G... 3. Virtue Ethics: The Misleading Category OpenAIRE Nussbaum, Martha 2013-01-01 Virtue ethics is frequently considered to be a single category of ethical theory, and a rival to Kantianism and Utilitarianism. I argue that this approach is a mistake, because both Kantians and Utilitarians can, and do, have an interest in the virtues and the formation of character. But even if we focus on the group of ethical theorists who are most commonly called "virtue theorists" because they reject the guidance of both Kantianism and Utilitarianism, and derive inspiration from ancient G... 4.
1999 who's who category index International Nuclear Information System (INIS) 1999-01-01 A classified index and alphabetical directory of Canadian corporate entities involved in the production, manufacturing, conversion, service, retail sales, research and development, transportation, insurance, legal and communications aspects of propane in Canada is provided. The alphabetical directory section provides the usual business information (name, postal address, phone, fax, e-mail and Internet addresses), names of principal officers, affiliations, products or services produced or marketed, and the category under which the company is listed in the classified index 5. Bark- and wood-borer colonization of logs and lumber after heat treatment to ISPM 15 specifications: the role of residual bark Science.gov (United States) Robert A. Haack; Toby R. Petrice 2009-01-01 Wood packaging material (WPM) is a major pathway for international movement of bark- and wood-infesting insects. ISPM 15, the first international standard for treating WPM, was adopted in 2002 and first implemented in the United States in 2006. ISPM 15 allows bark to remain on WPM after treatment, raising concerns that insects could infest after treatment, especially... 6. Self-organizing feature map (neural networks) as a tool to select the best indicator of road traffic pollution (soil, leaves or bark of Robinia pseudoacacia L.) Energy Technology Data Exchange (ETDEWEB) Samecka-Cymerman, A., E-mail: [email protected] [Department of Ecology, Biogeochemistry and Environmental Protection, Wroclaw University, ul. Kanonia 6/8, 50-328 Wroclaw (Poland); Stankiewicz, A.; Kolon, K. [Department of Ecology, Biogeochemistry and Environmental Protection, Wroclaw University, ul. Kanonia 6/8, 50-328 Wroclaw (Poland); Kempers, A.J. 
[Department of Environmental Sciences, Radboud University of Nijmegen, Toernooiveld, 6525 ED Nijmegen (Netherlands) 2009-07-15 Concentrations of the elements Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb and Zn were measured in the leaves and bark of Robinia pseudoacacia and the soil in which it grew, in the town of Olesnica (SW Poland) and at a control site. We selected this town because emission from motor vehicles is practically the only source of air pollution, and it seemed interesting to evaluate its influence on soil and plants. The self-organizing feature map (SOFM) yielded distinct groups of soils and R. pseudoacacia leaves and bark, depending on traffic intensity. Only the map classifying bark samples identified an additional group of highly polluted sites along the main highway from Wroclaw to Warszawa. The bark of R. pseudoacacia seems to be a better bioindicator of long-term cumulative traffic pollution in the investigated area, while leaves are good indicators of short-term seasonal accumulation trends. - Once trained, SOFM could be used in the future to recognize types of pollution. 7. Anti-diarrhea activity of the aqueous root bark extract of Byrsocarpus coccineus on castor oil-induced diarrhea in Wistar rats. Science.gov (United States) Ejeh, Sunday A; Onyeyili, Patrick; Abalaka, Samson E 2017-07-01 The use of traditional medicine as an alternative source of cure for many ailments has played an important role in health care delivery in both developing and developed countries. Byrsocarpus coccineus Schum and Thonn ( Connaraceae ) is used in traditional medicine for treatment of various disease conditions, including diarrhea. The anti-diarrhea activity of the root bark aqueous extract of B. coccineus was investigated in this study. Acute toxicity evaluation of the aqueous extract of B. coccineus root bark was performed in exposed rats. 
Diarrhea was induced in exposed rats with castor oil, and the effect of the extract on castor oil-induced gastrointestinal motility and enteropooling was consequently investigated. In the acute toxicity study, the extract caused no death in treated rats nor produced signs of delayed toxicity, even at 5000 mg/kg. The aqueous root bark extract of B. coccineus also decreased the distance travelled by activated charcoal in the gastrointestinal tract of treated rats when compared to control rats. Results of castor oil-induced enteropooling revealed slight reduction in the weight of intestinal contents of treated rats compared to control rats. There was significant (p < 0.05) inhibition of castor oil-induced diarrhea at the 100 mg/kg dose, with 74.96% inhibition of defecation. The study demonstrated the anti-diarrheic property of the aqueous extract of B. coccineus root bark as currently exploited in our traditional herbal therapy. 8. Effect of Phenotypic Screening of Extracts and Fractions of Erythrophleum ivorense Leaf and Stem Bark on Immature and Adult Stages of Schistosoma mansoni Directory of Open Access Journals (Sweden) Gertrude Kyere-Davies 2018-01-01 Full Text Available Schistosomiasis is a disease caused by a flatworm parasite that infects people in tropical and subtropical regions of Sub-Saharan Africa, South America, China, and Southeast Asia. The reliance on just one drug for current treatment emphasizes the need for new chemotherapeutic strategies. The aim of this study was to determine the phenotypic effects of extracts and fractions of leaf and stem bark of Erythrophleum ivorense (family Euphorbiaceae), a tree that grows in tropical parts of Africa, on two developmental stages of Schistosoma mansoni, namely, postinfective larvae (schistosomula or somules) and adults. Methanol leaf and stem bark extracts of E. ivorense were successively fractionated with acetone, petroleum ether, ethyl acetate, and methanol.
These fractions were then incubated with somules at 0.3125 to 100 μg/mL and with adults at 1.25 μg/mL. The acetone fractions of both the methanol leaf and bark of E. ivorense were most active against the somules whereas the petroleum ether fractions showed least activity. For adult parasites, the acetone fraction of methanol bark extract also elicited phenotypic changes. The data arising provide the first step in the discovery of new treatments for an endemic infectious disease using locally sourced African medicinal plants. 9. A category of its own? DEFF Research Database (Denmark) Elklit, Jørgen; Roberts, Nigel S. 1996-01-01 of these systems on the proportionality of the representation of political parties are, indeed, comparable. The four electoral systems were the basis of their countries' general elections during 1994. The results of these elections are used for analyses and discussions of the relative importance of the differences......At first sight, the electoral systems in Denmark, Germany, South Africa and Sweden may seem different, and an attempt to categorize them together odd. All four, however, belong to the same category, which Arend Lijphart calls 'proportional representation two-tier districting systems', and the effects... 10. Functional categories in comparative linguistics DEFF Research Database (Denmark) Rijkhoff, Jan , Roger M. 1979. Linguistic knowledge and cultural knowledge: some doubts and speculation. American Anthropologist 81-1, 14-36. Levinson, Stephen C. 1997. From outer to inner space: linguistic categories and non-linguistic thinking. In J. Nuyts and E. Pederson (eds.), Language and Conceptualization, 13......). Furthermore certain 'ontological categories' are language-specific (Malt 1995). For example, speakers of Kalam (New Guinea) do not classify the cassowary as a bird, because they believe it has a mythical kinship relation with humans (Bulmer 1967). In this talk I will discuss the role of functional... 11.
14 CFR 23.3 - Airplane categories. Science.gov (United States) 2010-01-01 ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Airplane categories. 23.3 Section 23.3... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES General § 23.3 Airplane categories. (a) The normal category is limited to airplanes that have a seating configuration, excluding pilot... 12. Green Synthesis of Silver Nanoparticles Using Pinus eldarica Bark Extract Directory of Open Access Journals (Sweden) Siavash Iravani 2013-01-01 Full Text Available Recently, development of reliable experimental protocols for synthesis of metal nanoparticles with desired morphologies and sizes has become a major focus of researchers. Green synthesis of metal nanoparticles using organisms has emerged as a nontoxic and ecofriendly method for synthesis of metal nanoparticles. The objectives of this study were production of silver nanoparticles using Pinus eldarica bark extract and optimization of the biosynthesis process. The effects of quantity of extract, substrate concentration, temperature, and pH on the formation of silver nanoparticles are studied. TEM images showed that biosynthesized silver nanoparticles (approximately in the range of 10–40 nm) were predominantly spherical in shape. The preparation of nano-structured silver particles using P. eldarica bark extract provides an environmentally friendly option, as compared to currently available chemical and/or physical methods. 13.
Aspect as a Communicative Category DEFF Research Database (Denmark) Durst-Andersen, Per 2018-01-01 On the basis of internal evidence from primarily the use of imperfective forms and external evidence from primarily first language acquisition, it is argued that English, Russian, and French aspect differ from one another, because they go back to an obligatory choice among three possible communic......On the basis of internal evidence from primarily the use of imperfective forms and external evidence from primarily first language acquisition, it is argued that English, Russian, and French aspect differ from one another, because they go back to an obligatory choice among three possible...... communicative directions: should a grammatical category be grounded in the speaker's experience of a situation, in the situation referred to or in the hearer as information about the situation? The progressive vs. non-progressive distinction in English is acquired in the present tense of atelic (simplex) verbs...... to the meta-distinction between atelic (simplex) and telic (complex) verbs. It is second-person oriented. The specific order arrived at reflects the Peircean categories of Firstness, Secondness, and Thirdness and their predictions. This can account for the fact that the English and Russian types can be found... 14. Constituents from the bark resin of Schinus molle Directory of Open Access Journals (Sweden) Gonzalo Rodolfo Malca-García Full Text Available ABSTRACT A total of five terpenes was isolated from the bark resin of Schinus molle L., Anacardiaceae, and their structures were determined by spectroscopic techniques. Among these compounds the sesquiterpene hydrocarbon terebinthene showed significant growth inhibitory activity against human colon carcinoma HCT-116 cells. Furthermore, terebinthene and pinicolic acid (5) also showed antibacterial activity against Staphylococcus aureus ATCC 25923 and Bacillus subtilis ATCC 6633. 15.
Phenolic glycosides from sugar maple (Acer saccharum) bark. Science.gov (United States) Yuan, Tao; Wan, Chunpeng; González-Sarrías, Antonio; Kandhi, Vamsikrishna; Cech, Nadja B; Seeram, Navindra P 2011-11-28 Four new phenolic glycosides, saccharumosides A-D (1-4), along with eight known phenolic glycosides, were isolated from the bark of sugar maple (Acer saccharum). The structures of 1-4 were elucidated on the basis of spectroscopic data analysis. All compounds isolated were evaluated for cytotoxicity effects against human colon tumorigenic (HCT-116 and Caco-2) and nontumorigenic (CCD-18Co) cell lines. 16. Coffee Berry Borer Joins Bark Beetles in Coffee Klatch Science.gov (United States) Jaramillo, Juliana; Torto, Baldwyn; Mwenda, Dickson; Troeger, Armin; Borgemeister, Christian; Poehling, Hans-Michael; Francke, Wittko 2013-01-01 Unanswered key questions in bark beetle-plant interactions concern host finding in species attacking angiosperms in tropical zones and whether management strategies based on chemical signaling used for their conifer-attacking temperate relatives may also be applied in the tropics. We hypothesized that there should be a common link in chemical signaling mediating host location by these Scolytids. Using laboratory behavioral assays and chemical analysis we demonstrate that the yellow-orange exocarp stage of coffee berries, which attracts the coffee berry borer, releases relatively high amounts of volatiles including conophthorin, chalcogran, frontalin and sulcatone that are typically associated with Scolytinae chemical ecology. The green stage of the berry produces a much less complex bouquet containing small amounts of conophthorin but no other compounds known as bark beetle semiochemicals. 
In behavioral assays, the coffee berry borer was attracted to the spiroacetals conophthorin and chalcogran, but avoided the monoterpenes verbenone and α-pinene, demonstrating that, as in their conifer-attacking relatives in temperate zones, the use of host and non-host volatiles is also critical in host finding by tropical species. We speculate that microorganisms formed a common basis for the establishment of crucial chemical signals comprising inter- and intraspecific communication systems in both temperate- and tropical-occurring bark beetles attacking gymnosperms and angiosperms. PMID:24073204 17. Solar radiation as a factor influencing the raid spruce bark beetle (Ips typographus) during spring swarming International Nuclear Information System (INIS) Mezei, P. 2011-01-01 Monitoring of the spruce bark beetle in the Fabova hola nature reserve in the Slovenske Rudohorie Mountains at an altitude of 1,100–1,440 meters was conducted from 2006 to 2009. The Slovenske Rudohorie Mountains were affected by two windstorms (2004 and 2007) followed by a gradation of bark beetles. This article examines the dependence between the amount of solar radiation and the trapping of spruce bark beetles in pheromone traps. 18. Oak Bark Allometry and Fire Survival Strategies in the Chihuahuan Desert Sky Islands, Texas, USA OpenAIRE Schwilk, Dylan W.; Gaetani, Maria S.; Poulos, Helen M. 2013-01-01 Trees may survive fire through persistence of above or below ground structures. Investment in bark aids in above-ground survival while investment in carbohydrate storage aids in recovery through resprouting and is especially important following above-ground tissue loss. We investigated bark allocation and carbohydrate investment in eight common oak (Quercus) species of Sky Island mountain ranges in west Texas. We hypothesized that relative investment in bark and carbohydrates changes with tre... 19.
Relative enrichment of trace elements in atmospheric biomonitors - INAA results on tree bark and lichen thalli International Nuclear Information System (INIS) Pacheco, A.M.G.; Freitas, M.C.; Ventura, M.G 2002-01-01 Nuclear techniques, such as INAA and PIXE, are invaluable tools in environmental studies. Atmospheric biomonitoring, in particular, has been a preferential domain for their application, especially (yet not exclusively) due to their analytical robustness, minimal requirements as to sample preparation, and multi-elemental capabilities. The latter aspect is not just important for the way these techniques complement each other, but also for the possibility of multiple determination, which may provide an in-depth picture of an elemental pool and, therefore, assist in data analysis, qualification and interpretation, even if some research had been originally designed to target specific, fewer elements. This paper addresses the relative magnitude of concentration patterns (by INAA) in epiphytic lichens (Parmelia spp.) and olive tree (Olea europaea Linn.) bark from an extended sampling in mainland Portugal, by looking at representative elements from natural and anthropogenic sources. Higher plants have often been overlooked as indicators due to vascular and nutritional features, and also for supposedly yielding poorer analytical signals as a result of an inferior accumulation of airborne contaminants. A nonparametric assessment - correlation and sign trends - of raw and normalised (to a crustal reference) data has shown that while absolute concentrations are indeed (generally) higher in lichens, they also appear to be inflated by inputs from local circulation and/or re-suspension of previously deposited materials. On the contrary, the relative enrichment of non-crustal elements is almost invariably higher in bark than in lichens, which seems definitely at odds with the dim-accumulation scenario mentioned above.
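The crustal normalisation mentioned above is conventionally expressed as an enrichment factor; the form below is the standard biomonitoring formula, and the choice of reference element (commonly Sc or Fe) is an assumption for illustration, not stated in the abstract:

```latex
% Enrichment factor of element X in a sample, normalised to a crustal
% reference element R (e.g. Sc or Fe) and to average crustal abundances:
\mathrm{EF}_X \;=\;
\frac{\left( C_X / C_R \right)_{\mathrm{sample}}}
     {\left( C_X / C_R \right)_{\mathrm{crust}}}
% EF_X \gg 1 indicates a non-crustal (e.g. anthropogenic) contribution.
```

Values of EF near unity point to a soil/dust origin, which is why re-suspension can inflate absolute concentrations without raising the enrichment of non-crustal elements.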
Even when the opposite occurs, the corresponding differences are non-significant but for Cl. Judging from these results, the question of signal magnitude - and the problem of biased atmospheric indication at large - could eventually stem more from the impact of soil 20. Content of certain mineral components in the thallus of lichens and the bark of roadside trees Directory of Open Access Journals (Sweden) Stanisława Kuziel 2015-01-01 Full Text Available The total N, P, Mg, Ca, K and Na contents were investigated in the thalli of several lichen species occurring on various trees, and in the bark and bark extracts from these trees. pH of the bark extracts was also determined. Wide differences were found in the content of the elements in point in the thalli of various lichen species on Acer platanoides and on the thalli of the same species on other trees. No relation was detected between the chemical composition of the bark and that of the lichen thalli occurring on it. 1. Carbon Impacts of Fire- and Bark Beetle-Caused Tree Mortality across the Western US using the Community Land Model (Invited) Science.gov (United States) Meddens, A. J.; Hicke, J. A.; Edburg, S. L.; Lawrence, D. M. 2013-12-01 Wildfires and bark beetle outbreaks cause major forest disturbances in the western US, affecting ecosystem productivity and thereby impacting forest carbon cycling and future climate. Despite the large spatial extent of tree mortality, quantifying carbon flux dynamics following fires and bark beetles over larger areas is challenging because of forest heterogeneity, varying disturbance severities, and field observation limitations. The objective of our study is to estimate these dynamics across the western US using the Community Land Model (version CLM4.5-BGC). CLM4.5-BGC is a land ecosystem model that mechanistically represents the exchanges of energy, water, carbon, and nitrogen with the atmosphere. 
The most recent iteration of the model has been expanded to include vertically resolved soil biogeochemistry, improved nitrogen cycle representations including nitrification, denitrification and biological fixation, and improved canopy processes including photosynthesis. Prior to conducting simulations, we modified CLM4.5-BGC to include the effects of bark beetle-caused tree mortality on carbon and nitrogen stocks and fluxes. Once modified, we conducted paired simulations (with and without fire- and bark beetle-caused tree mortality) by using regional data sets of observed mortality as inputs. Bark beetle-caused tree mortality was prescribed from a data set derived from US Forest Service aerial surveys from 1997 to 2010. Annual tree mortality area was produced from observed tree mortality caused by bark beetles and was adjusted for underestimation. Fires were prescribed using the Monitoring Trends in Burn Severity (MTBS) database from 1984 to 2010. Annual tree mortality area was produced from forest cover maps and inclusion of moderate- and high-severity burned areas. Simulations show that the maximum yearly reduction of net ecosystem productivity (NEP) caused by bark beetles is approximately 20 Tg C for the western US. Fires cause similar reductions 2. Potential nutritional and antioxidant activity of various solvent extracts from leaves and stem bark of Anisophyllea laurina R. Br ex Sabine used in folk medicine Directory of Open Access Journals (Sweden) Gbago Onivogui 2017-07-01 Full Text Available ABSTRACT Anisophyllea laurina is a plant that has been used in folk medicine to treat malaria, dysentery, diabetes, toothache and various skin diseases. The leaf extract had a protein content of 9.68% and a high calcium content of 25084.317 mg/100 g, while the stem bark extract was found to contain greater amounts of calcium (8560.96 mg/100 g), potassium (7649.47 mg/100 g), magnesium (1462.49 mg/100 g) and iron (973.33 mg/100 g).
Palmitic acid, linolenic acid, linoleic acid and oleic acid were the most abundant fatty acids in the leaf and stem bark extracts. Furthermore, total phenolic (2382.39 mg GAE/100 g) and total flavonoid (385.79 mg QE/100 g) contents were abundant in the stem bark, while the leaf extract was rich in total tannin content (3466.63 mg CE/100 g). However, both leaves and stem bark contained great amounts of vitamins and amino acids and were a good source of antioxidant activities. For the individual polyphenols, stenophyllanin A (45.87 mg/g), casuarinin (24.55 mg/g) and digalloyl-HHDP-glucopyranose isomer (15.63 mg/g) were found to be the major compounds from the leaves, whereas procyanidin tetramer (14.89 mg/g), (−)-epicatechin (12.18 mg/g) and procyanidin trimer (11.25 mg/g) were the most predominant compounds from the stem bark. Additionally, the results revealed a significant and strong correlation between phenolic compounds and antioxidant activities. 3. Standardization of radioactive waste categories International Nuclear Information System (INIS) 1970-01-01 A large amount of information about most aspects of radioactive waste management has been accumulated and made available to interested nations in recent years. The efficiency of this service has been somewhat hampered because the terminology used to describe the different types of radioactive waste has varied from country to country and indeed from installation to installation within a given country. This publication is the outcome of a panel meeting on Standardization of Radioactive Waste Categories. It presents a simple standard to be used as a common language between people working in the field of waste management at nuclear installations. The purpose of the standard is only to act as a practical tool for increasing efficiency in communicating, collecting and assessing technical and economical information in the common interest of all nations and the developing countries in particular. 20 refs, 1 fig., 3 tabs 4.
Cryptically patterned moths perceive bark structure when choosing body orientations that match wing color pattern to the bark pattern. Directory of Open Access Journals (Sweden) Chang-Ku Kang Full Text Available Many moths have wing patterns that resemble the bark of trees on which they rest. The wing patterns help moths to become camouflaged and to avoid predation because the moths are able to assume specific body orientations that produce a very good match between the pattern on the bark and the pattern on the wings. Furthermore, after landing on bark, moths are able to perceive stimuli that correlate with their crypticity and are able to re-position their bodies to new, more cryptic locations and body orientations. However, the proximate mechanisms, i.e. how a moth finds an appropriate resting position and orientation, are poorly studied. Here, we used a geometrid moth Jankowskia fuscaria to examine (i) whether a choice of resting orientation by moths depends on the properties of natural background, and (ii) what sensory cues moths use. We studied moths' behavior on natural (a tree log) and artificial backgrounds, each of which was designed to mimic one of the hypothetical cues that moths may perceive on a tree trunk (visual pattern, directional furrow structure, and curvature). We found that moths mainly used structural cues from the background when choosing their resting position and orientation. Our findings highlight the possibility that moths use information from one type of sensory modality (the structure of furrows is probably detected through a tactile channel) to achieve crypticity in another sensory modality (visual). This study extends our knowledge of how behavior, sensory systems and morphology of animals interact to produce crypsis. 5. Cryptically patterned moths perceive bark structure when choosing body orientations that match wing color pattern to the bark pattern.
Science.gov (United States) Kang, Chang-Ku; Moon, Jong-Yeol; Lee, Sang-Im; Jablonski, Piotr G 2013-01-01 Many moths have wing patterns that resemble the bark of trees on which they rest. The wing patterns help moths to become camouflaged and to avoid predation because the moths are able to assume specific body orientations that produce a very good match between the pattern on the bark and the pattern on the wings. Furthermore, after landing on bark, moths are able to perceive stimuli that correlate with their crypticity and are able to re-position their bodies to new, more cryptic locations and body orientations. However, the proximate mechanisms, i.e. how a moth finds an appropriate resting position and orientation, are poorly studied. Here, we used a geometrid moth Jankowskia fuscaria to examine (i) whether a choice of resting orientation by moths depends on the properties of natural background, and (ii) what sensory cues moths use. We studied moths' behavior on natural (a tree log) and artificial backgrounds, each of which was designed to mimic one of the hypothetical cues that moths may perceive on a tree trunk (visual pattern, directional furrow structure, and curvature). We found that moths mainly used structural cues from the background when choosing their resting position and orientation. Our findings highlight the possibility that moths use information from one type of sensory modality (the structure of furrows is probably detected through a tactile channel) to achieve crypticity in another sensory modality (visual). This study extends our knowledge of how behavior, sensory systems and morphology of animals interact to produce crypsis. 6.
Evaluation of pine bark for treatment of water from biomass fueled plants; Utvaerdering av bark foer rening av vatten vid biobraensleeldade anlaeggningar Energy Technology Data Exchange (ETDEWEB) Hansson, Christina; Hansson, Helen; Hansson, Soeren [Carl Bro Energikonsult AB, Malmoe (Sweden)] 2004-01-01 In Sweden, large amounts of pine bark are produced as a by-product from the pulp and forest industry. This makes pine bark available in large volumes at a relatively low price. Pine bark has shown good absorption effect for organic pollutants, such as oil, in water, and pine bark is used commercially as an oil absorbent. In a study, pine bark has also been shown to have good absorption effects on heavy metals in water under laboratory conditions. This indicates that pine bark also could be used as a natural absorbent for heavy metals in flue gas condensate and for leachate from biomass fuel storage. For the latter purpose the bark could be used as a combined heavy metal and oil absorber. In this project the pine bark's ability to absorb heavy metals from flue gas condensate has been studied. The tests were performed using an untreated flue gas condensate, which was purified by using a basket filter with commercially available pine bark (trademark EcoBark) as absorbent. The bark filter has the same function as a tube reactor, which would imply that the absorption of heavy metals should be better than in the laboratory tests. However, the results from the flue gas condensate tests showed much lower absorption of heavy metals than the laboratory tests. The only significant absorption levels were found for iron and mercury, which showed a reduction ratio of about 25 %. Other metals, such as lead, cadmium, copper, nickel, vanadium and zinc had a reduction ratio of about 10 %, which is quite low compared to the 98 % reduction for lead and about 80 % for copper and zinc that was achieved in the former laboratory tests.
The most probable reason that the pine bark had such a low absorbent effect in the flue gas condensate is that the concentration of potassium and calcium constrains the ion exchange capacity of the pine bark. It is also likely that iron is mainly absorbed by the bark, while other metals are only separated as particles. Another possible reason for the rather poor 7. 40 CFR 98.190 - Definition of the source category. Science.gov (United States) 2010-07-01 .... (a) Lime manufacturing plants (LMPs) engage in the manufacture of a lime product (e.g., calcium oxide, high-calcium quicklime, calcium hydroxide, hydrated lime, dolomitic quicklime, dolomitic hydrate, or... kraft pulp mill, soda pulp mill, sulfite pulp mill, or only processes sludge containing calcium... 8. 40 CFR 98.250 - Definition of source category. Science.gov (United States) 2010-07-01 ...; asphalt blowing operations; blowdown systems; storage tanks; process equipment components (compressors... plants (i.e., hydrogen plants that are owned or under the direct control of the refinery owner and... 9. 40 CFR 98.270 - Definition of source category. Science.gov (United States) 2010-07-01 ... this section: (1) Chemical recovery furnaces at kraft and soda mills (including recovery furnaces that burn spent pulping liquor produced by both the kraft and semichemical process). (2) Chemical recovery... into paperboard products (e.g., containers), or operate coating and laminating processes. (b) The... 10. Color categories and color appearance Science.gov (United States) Webster, Michael A.; Kay, Paul 2011-01-01 We examined categorical effects in color appearance in two tasks, which in part differed in the extent to which color naming was explicitly required for the response. In one, we measured the effects of color differences on perceptual grouping for hues that spanned the blue–green boundary, to test whether chromatic differences across the boundary were perceptually exaggerated.
This task did not require overt judgments of the perceived colors, and the tendency to group showed only a weak and inconsistent categorical bias. In a second case, we analyzed results from two prior studies of hue scaling of chromatic stimuli (De Valois, De Valois, Switkes, & Mahon, 1997; Malkoc, Kay, & Webster, 2005), to test whether color appearance changed more rapidly around the blue–green boundary. In this task observers directly judged the perceived color of the stimuli, and these judgments tended to show much stronger categorical effects. The differences between these tasks could arise either because different signals mediate color grouping and color appearance, or because linguistic categories might differentially intrude on the response to color and/or on the perception of color. Our results suggest that the interaction between language and color processing may be highly dependent on the specific task and cognitive demands and strategies of the observer, and also highlight pronounced individual differences in the tendency to exhibit categorical responses. PMID:22176751 11. EU and Tourism Development: Bark or Bite? DEFF Research Database (Denmark) Halkier, Henrik 2010-01-01 In the absence of major programmes to strengthen the quality and competitiveness of European destinations, the role of the EU in tourism development has often been seen as fairly limited. Despite this, spill-overs or side effects from adjoining policy areas with extensive European regulation or intervention can be equally important, and the paper examines key aspects of the EU's role in tourism development in order to discuss to what extent the traditional interpretation of a passive actor of little consequence should be modified or even discarded. Drawing upon European and Nordic documentary sources
as well as existing specialist literature, the text first examines the development of an EU policy statement on tourism, and then two areas of EU policy - competition policy and regional development - are analysed with a view to establishing side-effects in European and Nordic destinations... 12. The Micro-Category Account of Analogy Science.gov (United States) Green, Adam E.; Fugelsang, Jonathan A.; Kraemer, David J. M.; Dunbar, Kevin N. 2008-01-01 Here, we investigate how activation of mental representations of categories during analogical reasoning influences subsequent cognitive processing. Specifically, we present and test the central predictions of the "Micro-Category" account of analogy. This account emphasizes the role of categories in aligning terms for analogical mapping. In a… 13. Individual differences in attention during category learning NARCIS (Netherlands) Lee, M.D.; Wetzels, R. 2010-01-01 A central idea in many successful models of category learning—including the Generalized Context Model (GCM)—is that people selectively attend to those dimensions of stimuli that are relevant for dividing them into categories. We use the GCM to re-examine some previously analyzed category learning 14. Biomonitoring air pollution in the Czech Republic by means of tree bark International Nuclear Information System (INIS) Musilek, L.; Cechak, T.; Losinska, J.; Wolterbeek, H.Th. 2000-01-01 From the point of view of atmospheric pollution some parts of the Czech Republic rank among the most devastated areas in Europe. Heavy industry is the source of exhausts which, especially in North-West Bohemia, have made large pieces of the country nearly dead. Therefore, monitoring air pollution is one of the key questions in environmental studies in the country. Our survey intended to use similar methods like those used in the Netherlands at the end of the 80's, i.e., activation analysis of lichen Parmelia sulcata. 
However, preliminary investigations have shown that suitable lichens have disappeared in the most polluted areas. Therefore, tree bark has been chosen as a biomonitor. Both activation analysis in the IRI TUDelft and radionuclide X-ray fluorescence in the FNSPE CTU Prague have been used as the methods of trace element analysis. Some methodological remarks are summarised in the first part of the paper. The effort was directed towards optimising the method in the relatively complicated height profile of the Czech landscape. Finally, oak bark was chosen as the biomonitor; investigations of disturbing effects led to the conclusion that they were within the error of measurement. The second part of the paper is devoted to the results of application of the method to a specified area of the Czech Republic. This survey covered an area of nearly 40,000 square kilometres. It included the most important parts of the country from the point of view of atmospheric pollution. The evaluation of both the INAA and RXRFA results is still in progress. Nevertheless, some maps of the relative distribution of air pollution over the monitored area can now be presented. They show that for some elements (sulphur, titanium) the range of the concentrations measured is extraordinarily high and that the situation in North-West Bohemia is really alarming. (author) 15. Semiochemical sabotage: behavioral chemicals for protection of western conifers from bark beetles Science.gov (United States) Nancy. E. Gillette; A. Steve Munson 2009-01-01 The discovery and elucidation of volatile behavioral chemicals used by bark beetles to locate hosts and mates has revealed a rich potential for humans to sabotage beetle host-finding and reproduction. Here, we present a description of currently available semiochemical methods for use in monitoring and controlling bark beetle pests in western conifer forests. Delivery... 16.
Ethanol accumulation during severe drought may signal tree vulnerability to detection and attack by bark beetles Science.gov (United States) Rick G. Kelsey; D. Gallego; F.J. Sánchez-Garcia; J.A. Pajares 2014-01-01 Tree mortality from temperature-driven drought is occurring in forests around the world, often in conjunction with bark beetle outbreaks when carbon allocation to tree defense declines. Physiological metrics for detecting stressed trees with enhanced vulnerability prior to bark beetle attacks remain elusive. Ethanol, water, monoterpene concentrations, and composition... 17. 78 FR 4167 - Certain Electronic Bark Control Collars; Notice of Receipt of Complaint; Solicitation of Comments... Science.gov (United States) 2013-01-18 ... INTERNATIONAL TRADE COMMISSION [Docket No. 2932] Certain Electronic Bark Control Collars; Notice.... International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International Trade Commission has received a complaint entitled Certain Electronic Bark Control Collars, DN 2932; the... 18. Influence of temperature on spring flight initiation for southwestern ponderosa pine bark beetles (Coleoptera: Curculionidae, Scolytinae) Science.gov (United States) M. L. Gaylord; K. K. Williams; R. W. Hofstetter; J. D. McMillin; T. E. Degomez; M. R. Wagner 2008-01-01 Determination of temperature requirements for many economically important insects is a cornerstone of pest management. For bark beetles (Coleoptera: Curculionidae, Scolytinae), this information can facilitate timing of management strategies. Our goals were to determine temperature predictors for flight initiation of three species of Ips bark beetles... 19. Removal of Water-Soluble Extractives Improves the Enzymatic Digestibility of Steam-Pretreated Softwood Barks. 
Science.gov (United States) Frankó, Balázs; Carlqvist, Karin; Galbe, Mats; Lidén, Gunnar; Wallberg, Ola 2018-02-01 Softwood bark contains large amounts of extractives, i.e., soluble lipophilic components (such as resin acids) and hydrophilic components (phenolic compounds, stilbenes). The effects of the partial removal of water-soluble extractives before acid-catalyzed steam pretreatment on enzymatic digestibility were assessed for two softwood barks: Norway spruce and Scots pine. A simple hot water extraction step removed more than half of the water-soluble extractives from the barks, which improved the enzymatic digestibility of both steam-pretreated materials. This effect was more pronounced for the spruce than the pine bark, as evidenced by the 30% and 11% glucose yield improvements, respectively, in the enzymatic digestibility. Furthermore, analysis of the chemical composition showed that the acid-insoluble lignin content of the pretreated materials decreased when water-soluble extractives were removed prior to steam pretreatment. This can be explained by a decreased formation of water-insoluble "pseudo-lignin" from water-soluble bark phenolics during the acid-catalyzed pretreatment, which otherwise results in distorted lignin analysis and may also contribute to the impaired enzymatic digestibility of the barks. Thus, this study advocates the removal of extractives as the first step in the processing of bark or bark-rich materials in a sugar platform biorefinery. 20. Parasiticidal effects of Morus alba root bark extracts against Ichthyophthirius multifiliis infecting grass carp Science.gov (United States) Ichthyophthirius multifiliis (Ich) is an important fish parasite that can result in significant losses in aquaculture. In order to find efficacious drugs to control Ich, the root bark of Morus alba, a traditional Chinese medicine, was evaluated for its antiprotozoal activity. The M. alba root bark w... 1.
Bark traits and life-history strategies of tropical dry- and moist forest trees NARCIS (Netherlands) Poorter, L.; McNeil, A.; Hurtado, V.H.; Prins, H.H.T.; Putz, F.E. 2014-01-01 1. Bark is crucial to trees because it protects their stems against fire and other hazards and because of its importance for assimilate transport, water relationships and repair. We evaluate size-dependent changes in bark thickness for 50 woody species from a moist forest and 50 species from a dry 2. Wood and bark anatomy of young beech in relation to Cryptococcus attack Science.gov (United States) David. Lonsdale 1983-01-01 Within a sample of European beech, partial resistance to attack by the beech scale, Cryptococcus fagisuga, was associated with a smooth bark which had a regular, vertical pattern in its surface 'growth lines'. Such bark contained relatively little lignified outer parenchyma, and the main stone cell layer was strongly developed. The '... 3. Whole-tree bark and wood properties of loblolly pine from intensively managed plantations Science.gov (United States) Finto Antony; Laurence R. Schimleck; Richard F. Daniels; Alexander Clark; Bruce E. Borders; Michael B. Kane; Harold E. Burkhart 2015-01-01 A study was conducted to identify geographical variation in loblolly pine bark and wood properties at the whole-tree level and to quantify the responses in whole-tree bark and wood properties following contrasting silvicultural practices that included planting density, weed control, and fertilization. Trees were destructively sampled from both conventionally managed... 4. Development of molecular tools for use in beech bark disease management Science.gov (United States) Jennifer L. Koch; David W. Carey; Mary E. Mason; C. Dana Nelson; Abdelali Barakat; John E. Carlson; David. Neale 2011-01-01 Beech bark disease (BBD) has been killing American beech trees in eastern North America since the late 1890s.
The disease is initiated by feeding of the beech scale insect, Cryptococcus fagisuga, which leads to the development of small fissures in the bark. 5. Bundles of C*-categories and duality OpenAIRE Vasselli, Ezio 2005-01-01 We introduce the notions of multiplier C*-category and continuous bundle of C*-categories, as the categorical analogues of the corresponding C*-algebraic notions. Every symmetric tensor C*-category with conjugates is a continuous bundle of C*-categories, with base space the spectrum of the C*-algebra associated with the identity object. We classify tensor C*-categories with fibre the dual of a compact Lie group in terms of suitable principal bundles. This also provides a classification for ce... 6. Phytochemical Analysis and Biological Activities of Cola nitida Bark Directory of Open Access Journals (Sweden) Durand Dah-Nouvlessounon 2015-01-01 Full Text Available Kola nut is chewed in many West African cultures and is used ceremonially. The aim of this study is to investigate some biological effects of Cola nitida's bark after phytochemical screening. The bark was collected, dried, and then powdered for the phytochemical screening and extractions. Ethanol and ethyl acetate extracts of C. nitida were used in this study. The antibacterial activity was tested on ten reference strains and 28 meat-isolated Staphylococcus strains by the disc diffusion method. The antifungal activity against three fungal strains was determined on Potato-Dextrose Agar medium mixed with the appropriate extract. The antioxidant activity was determined by DPPH and ABTS methods. Our data revealed the presence of various potent phytochemicals. For the reference and meat-isolated strains, the inhibitory zone diameter ranged from 17.5±0.7 mm (C. albicans) to 9.5±0.7 mm (P. vulgaris). The MIC ranged from 0.312 mg/mL to 5.000 mg/mL and the MBC from 0.625 mg/mL to >20 mg/mL. The highest antifungal activity was observed with F. verticillioides and the lowest one with P. citrinum.
The two extracts have excellent free radical scavenging activity. The killing effect on A. salina larvae was perceptible at 1.04 mg/mL. The purified extracts of Cola nitida's bark can be used to preserve meat products and also as phytomedicine. 7. Effects of sulfur dioxide pollution on bark epiphytes Energy Technology Data Exchange (ETDEWEB) Coker, P D 1967-01-01 The destructive effects of sulfur dioxide pollution on epiphytic bryophytes are seen to be due to chlorophyll degradation and the impairment of cell structure and function through plasmolysis. Morphological changes noted by Pearson and Skye (1965) in lichens were not seen, although stunting and infertility are evident in epiphyte remnants in polluted areas. The investigation of the ion exchange and buffer capacities of sycamore bark indicates a loss of both in approximate proportion to the degree of pollution. Smoke and aerosol particles are not considered to be of particular importance at the present time although they may well have been important in the past. 8. Chemical Constituents from Stem Bark and Roots of Clausena anisata Directory of Open Access Journals (Sweden) Etienne Dongo 2012-11-01 Full Text Available Phytochemical investigations on the stem bark and roots of the tropical shrub Clausena anisata led to the isolation and characterization of three carbazole alkaloids: girinimbine, murrayamine-A and ekeberginine; two peptide derivatives: aurantiamide acetate and N-benzoyl-l-phenylalaninyl-N-benzoyl-l-phenylalaninate; and a mixture of two phytosterols: sitosterol and stigmasterol. The structures of these compounds were established by nuclear magnetic resonance (1H-NMR, 13C-NMR, COSY, HSQC, HMQC, HMBC and NOESY) spectroscopy and electrospray ionization mass spectrometry (MS). 9.
Flavonoid Compounds from the Bark of Aglaia eximia (Meliaceae) OpenAIRE Julinton Sianturi; Mayshah Purnamasari; Tri Mayanti; Desi Harneti; Unang Supratman; Khalijah Awang; Hideo Hayashi 2015-01-01 Three flavonoid compounds, kaempferol (1), kaempferol-3-O-α-L-rhamnoside (2), and kaempferol-3-O-β-D-glucosyl-α-L-rhamnoside (3), were isolated from the bark of Aglaia eximia (Meliaceae). The chemical structures of compounds 1–3 were identified with spectroscopic data, including UV, IR, NMR (1H, 13C, DEPT 135°, HMQC, HMBC, 1H-1H-COSY NMR), and MS, as well as by comparison with previously reported spectral data. All compounds were evaluated for their cytotoxic effects against P-388 murine leukemia... 10. Removal of chromium (VI) by using eucalyptus bark (biosorption) International Nuclear Information System (INIS) Khatoon, S.; Anwar, J.; Fatima, H.B. 2009-01-01 Adsorption of Chromium (VI) on Eucalyptus bark has been studied with variation of parameters. Different parameters like particle size of adsorbent, concentration of adsorbate, amount of adsorbent, stirring speed, time, temperature and pH were studied. The adsorption was carried out in a batch process. The adsorption capacity increases with decreasing particle size of the adsorbent. The optimum conditions for the maximum adsorption are attained with 2.0 g of adsorbent, 40 ppm metal ion concentration, at room temperature (10 °C), with 90 min contact time, with 300 rpm agitation speed and at pH 2. (author) 11. Calotroposide S, New Oxypregnane Oligoglycoside from Calotropis procera Root Bark Directory of Open Access Journals (Sweden) Sabrin R. M. Ibrahim 2016-05-01 Full Text Available Calotroposide S (1), a new oxypregnane oligoglycoside, has been isolated from the n-butanol fraction of Calotropis procera (Ait) R. Br. root bark. The structure of 1 was assigned based on various spectroscopic analyses.
Calotroposide S (1) possesses the 12-O-benzoylisolineolon aglycone moiety with eight sugar residues attached to C-3 of the aglycone. It showed potent anti-proliferative activity towards PC-3 prostate cancer, A549 non-small cell lung cancer (NSCLC), and U373 glioblastoma (GBM) cell lines with IC50 values of 0.18, 0.2, and 0.06 µM, respectively, compared with cisplatin and carboplatin. 12. Cytotoxic Constituents from bark and leaves of Amyris pinnata Kunth. Directory of Open Access Journals (Sweden) Luis Enrique Cuca-Suarez 2015-04-01 Full Text Available From leaves and bark of Amyris pinnata Kunth twelve compounds were isolated, corresponding to six lignans 1-6, three coumarins 7-9, a sesquiterpene 10, an oxazole alkaloid 11, and a prenylated flavonoid 12. Metabolites were identified by spectroscopic techniques (1H and 13C NMR, EIMS) and by comparison with published data in the literature. Cytotoxicity against leukemia, solid tumors, and normal cells was evaluated for all isolated compounds. Lignans were found to be the most cytotoxic compounds occurring in A. pinnata. 13. Chemical constituents from bark of Cenostigma macrophyllum: cholesterol occurrence International Nuclear Information System (INIS) Silva, Hilris Rocha e; Silva, Carmem Cicera Maria da; Caland Neto, Laurentino Batista; Lopes, Jose Arimateia Dantas; Cito, Antonia Maria das Gracas Lopes; Chaves, Mariana H. 2007-01-01 Phytochemical investigation of the bark of Cenostigma macrophyllum (Leguminosae-Caesapinioideae) resulted in the isolation and identification of valoneic acid dilactone, ellagic acid, lupeol, alkyl ferulate, four free sterols (cholesterol, campesterol, stigmasterol and sitosterol), a mixture of sitosteryl ester derivatives of fatty acids, sitosterol-3-O-beta-D-glucopyranoside, stigmasterol-3-O-beta-D-glucopyranoside and saturated and unsaturated fatty acids. The structures of the isolated compounds were identified by 1H and 13C NMR spectral analysis and comparison with literature data.
The mixtures of 3-beta-hydroxysterols and fatty acids were analysed by GC/MS. (author) 14. In vivo antinociceptive and muscle relaxant activity of leaf and bark of Buddleja asiatica L. Science.gov (United States) Barkatullah, -; Ibrar, Muhammad; Ikram, Nazia; Rauf, Abdur; Hadda, Taibi Ben; Bawazeer, Saud; Khan, Haroon; Pervez, Samreen 2016-09-01 The current study was designed to assess the antinociceptive and skeletal muscle relaxant effects of leaves and bark of Buddleja asiatica in animal models. In the acetic acid induced writhing test, pretreatment with the ethanolic extract of leaves and bark evoked a marked dose-dependent antinociceptive effect with a maximum of 70% and 67% pain relief at 300 mg/kg i.p., respectively. In the chimney test, the ethanolic extract of leaves and bark evoked a maximum of 66.66% and 53.33% muscle relaxant effect after 90 min of treatment at 300 mg/kg i.p., respectively. In the traction test, the ethanolic extract of leaves and bark caused a maximum of 60% and 73.33% muscle relaxant effect after 90 min of treatment at 300 mg/kg i.p., respectively. In short, both leaves and bark demonstrated profound antinociceptive and skeletal muscle relaxant effects, and thus the study provides natural healing agents for the treatment of the said disorders. 15. Studies on the efficacy of Bridelia ferruginea Benth bark extract for domestic wastewater treatment Directory of Open Access Journals (Sweden) O.M. Kolawole 2007-08-01 Full Text Available The efficacy of Bridelia ferruginea Benth bark extract in wastewater treatment was investigated. Chemical analysis found the bark to contain potassium, sodium, calcium, magnesium, zinc, manganese, iron and copper. Phytochemical tests revealed the bark to contain tannins, phlobatannins, saponins, alkaloids, and steroids.
Comparative studies using varying concentrations (0.5, 1.0, 2.5 and 5.0 % w/v) with alum and ferric chloride showed that the bark extract was effective in the clarification and sedimentation of total solids in the wastewater sample. The optimum dose achieved was 2.5 % w/v with a minimum of 24 hours contact time. The total bacteria counts were reduced by 46 % after 24 hours when the extract was used, whereas ferric chloride achieved 50 % reduction and alum achieved 55 % reduction under similar conditions. The feasibility of using the bark extract as an additional coagulant is therefore discussed. 16. PRECEDENCE AS A PSYCHOLINGUISTIC CATEGORY Directory of Open Access Journals (Sweden) Panarina Nadezhda Sergeevna 2015-06-01 In summary, any speech act assumes a particular correlation and content of meaning components. The presence of a culturological component in the meaning structure reflects the specific nature of the structural elements of speech activity. Therefore, precedence is a psycholinguistic category, which must be considered taking into account the structural features of a particular speech activity. 17. Procedural-Based Category Learning in Patients with Parkinson's Disease: Impact of Category Number and Category Continuity Directory of Open Access Journals (Sweden) J. Vincent eFiloteo 2014-02-01 Full Text Available Previously we found that Parkinson's disease (PD) patients are impaired in procedural-based category learning when category membership is defined by a nonlinear relationship between stimulus dimensions, but these same patients are normal when the rule is defined by a linear relationship (Filoteo et al., 2005; Maddox & Filoteo, 2001). We suggested that PD patients' impairment was due to a deficit in recruiting 'striatal units' to represent complex nonlinear rules.
In the present study, we further examined the nature of PD patients' procedural-based deficit in two experiments designed to examine the impact of (1) the number of categories, and (2) category discontinuity on learning. Results indicated that PD patients were impaired only under discontinuous category conditions but were normal when the number of categories was increased from two to four. The lack of impairment in the four-category condition suggests normal integrity of striatal medium spiny cells involved in procedural-based category learning. In contrast, and consistent with our previous observation of a nonlinear deficit, the finding that PD patients were impaired in the discontinuous condition suggests that these patients are impaired when they have to associate perceptually distinct exemplars with the same category. Theoretically, this deficit might be related to dysfunctional communication among medium spiny neurons within the striatum, particularly given that these are cholinergic neurons and a cholinergic deficiency could underlie some of PD patients' cognitive impairment. 18. Feature-Based versus Category-Based Induction with Uncertain Categories Science.gov (United States) Griffiths, Oren; Hayes, Brett K.; Newell, Ben R. 2012-01-01 Previous research has suggested that when feature inferences have to be made about an instance whose category membership is uncertain, feature-based inductive reasoning is used to the exclusion of category-based induction. These results contrast with the observation that people can and do use category-based induction when category membership is… 19. Cultural categories of energy use International Nuclear Information System (INIS) Barnett, S. 1983-06-01 Energy use patterns and attitudes are tied to more general cultural patterns, not simply to an economic or engineering rationality.
To illustrate this, sharply contrasting use patterns and attitudes toward nuclear energy, coal and alternative forms of energy will be compared for India and the United States. This comparison, focussing on different perceptions of risks associated with developing energy technologies, future energy requirements, environmental concerns, and conservation patterns, will be used to develop a more extensive discussion of the above energy sources in western Europe and South America. The talk will conclude with some conjectures about the implications of cultural differences and political environments for world energy development patterns in the proximate future. Likely cultural and political influences on the direction of energy use (efficiency, rates of adoption of new technology) and energy needs for industrialization and reindustrialization will be described for North America, western Europe, and selected LDCs 20. Synthesis of silver nanoparticles using medicinal Zizyphus xylopyrus bark extract Science.gov (United States) Sumi Maria, Babu; Devadiga, Aishwarya; Shetty Kodialbail, Vidya; Saidutta, M. B. 2015-08-01 In the present paper, biosynthesis of silver nanoparticles using Zizyphus xylopyrus bark extract is reported. Z. xylopyrus bark extract is efficiently used for the biosynthesis of silver nanoparticles. UV-Visible spectroscopy showed surface plasmon resonance peaks in the range of 413-420 nm, confirming the formation of silver nanoparticles. Different factors affecting the synthesis of silver nanoparticles, like the methodology for the preparation of extract, the concentration of silver nitrate solution used for biosynthesis and the initial pH of the reaction mixture, were studied. The extract prepared with 10 mM AgNO3 solution by the reflux extraction method at an optimum initial pH of 11 resulted in higher conversion of silver ions to silver nanoparticles as compared with those prepared by open heating or ultrasonication.
SEM analysis showed that the biosynthesized nanoparticles are spherical in nature and ranged from 60 to 70 nm in size. EDX suggested that the silver nanoparticles must be capped by the organic components present in the plant extract. This simple process for the biosynthesis of silver nanoparticles using aqueous extract of Z. xylopyrus is a green technology without the use of hazardous and toxic solvents and chemicals and hence is environmentally friendly. The process has several advantages with reference to cost, compatibility for its application in medical and drug delivery, as well as for large-scale commercial production. 1. STUDIES ON SOME PHYSICOCHEMICAL PROPERTIES OF LEUCAENA LEUCOCEPHALA BARK GUM Directory of Open Access Journals (Sweden) Vijetha Pendyala 2010-06-01 Full Text Available Gum exudates from Leucaena Leucocephala (Family: Fabaceae) plants grown all over India were investigated for their physicochemical properties such as pH, swelling capacity and viscosities at different temperatures using standard methods. Leucaena Leucocephala bark gum appeared to be colorless to reddish brown translucent tears. A 5 % w/v mucilage has a pH of 7.5 at 28°C. The gum is slightly soluble in water and practically insoluble in ethanol, acetone and chloroform. It swells to about 5 times its original weight in water. A 5 % w/v mucilage concentration gave a viscosity value which was unaffected over the temperature range (28-40°C). At concentrations of 2 and 5 % w/v, the gum exhibited a pseudoplastic flow pattern, while at 10 % w/v concentration the flow behaviour was thixotropic. The results indicate that the swelling ability of Leucaena Leucocephala (LL) bark gum may provide potential for its use as a disintegrant in tablet formulation and as a hydrogel in modified release dosage forms, and the rheological flow properties may also provide potential for its use as a suspending and emulsifying agent owing to its pseudoplastic and thixotropic flow patterns. 2.
Biological factors contributing to bark and ambrosia beetle species diversification. Science.gov (United States) Gohli, Jostein; Kirkendall, Lawrence R; Smith, Sarah M; Cognato, Anthony I; Hulcr, Jiri; Jordal, Bjarte H 2017-05-01 The study of species diversification can identify the processes that shape patterns of species richness across the tree of life. Here, we perform comparative analyses of species diversification using a large dataset of bark beetles. Three covariates were examined: permanent inbreeding (sibling mating), fungus farming, and major host type. These represent a range of factors that may be important for speciation. We studied the association of these covariates with species diversification while controlling for evolutionary lag on adaptation. All three covariates were significantly associated with diversification, but fungus farming showed conflicting patterns between different analyses. Genera that exhibited interspecific variation in host type had higher rates of species diversification, which may suggest that host switching is a driver of species diversification or that certain host types or forest compositions facilitate colonization and thus allopatric speciation. Because permanent inbreeding is thought to facilitate dispersal, the positive association between permanent inbreeding and diversification rates suggests that dispersal ability may contribute to species richness. Bark beetles are ecologically unique; however, our results indicate that their impressive species diversity is largely driven by mechanisms shown to be important for many organism groups. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution. 3. Acute and subacute toxicity of Schinus terebinthifolius bark extract.
Science.gov (United States) Lima, L B; Vasconcelos, C F B; Maranhão, H M L; Leite, V R; Ferreira, P A; Andrade, B A; Araújo, E L; Xavier, H S; Lafayette, S S L; Wanderley, A G 2009-12-10 Schinus terebinthifolius Raddi (Anacardiaceae) has long been used in traditional Brazilian medicine, especially to treat inflammatory and haemostatic diseases. The objective of this study was to evaluate the acute and subacute toxicity (45 days) of Schinus terebinthifolius via the oral route in Wistar rats of both sexes. For the acute toxicity test, the dried extract of Schinus terebinthifolius bark was administered in doses from 0.625 to 5.0 g/kg (n=5/group/sex) and in the subacute toxicity test the following doses were used: 0.25, 0.625 and 1.5625 g/kg/day (n=13/group/sex), for 45 consecutive days. In the acute toxicity test, Schinus terebinthifolius did not produce any toxic signs or deaths. The subacute treatment with Schinus terebinthifolius did not alter either the body weight gain or the food and water consumption. The hematological and biochemical analysis did not show significant differences in any of the parameters examined in female or male groups, except in two male groups, in which the treatment with Schinus terebinthifolius (0.25 and 0.625 g/kg) induced an increase of mean corpuscular volume values (2.9 and 2.6%, respectively). These variations are within the physiological limits described for the species and do not have clinical relevance. The acute and subacute administration of the dried extract of Schinus terebinthifolius bark did not produce toxic effects in Wistar rats. 4. TV MEDIA ANALYSIS FOR BANKING CATEGORY (2012) OpenAIRE Alexandra Elena POȘTOACĂ; Dorian – Laurențiu FLOREA 2014-01-01 This article represents a short overview of the media landscape for the banking category in Romania in 2012.
Unlike the other categories (for example FMCG – fast moving consumer goods), the banking category is more complex because every bank can communicate for a wider range of products (credits, deposits, packages dedicated to students, pensioners and other types of banking products). In the first part of this paper, there will be presented some theoretical notions about media planning a... 5. Misremembering emotion: Inductive category effects for complex emotional stimuli. Science.gov (United States) Corbin, Jonathan C; Crawford, L Elizabeth; Vavra, Dylan T 2017-07-01 Memories of objects are biased toward what is typical of the category to which they belong. Prior research on memory for emotional facial expressions has demonstrated a bias towards an emotional expression prototype (e.g., slightly happy faces are remembered as happier). We investigate an alternate source of bias in memory for emotional expressions - the central tendency bias. The central tendency bias skews reconstruction of a memory trace towards the center of the distribution for a particular attribute. This bias has been attributed to a Bayesian combination of an imprecise memory for a particular object with prior information about its category. Until now, studies examining the central tendency bias have focused on simple stimuli. We extend this work to socially relevant, complex, emotional facial expressions. We morphed facial expressions on a continuum from sad to happy. Different ranges of emotion were used in four experiments in which participants viewed individual expressions and, after a variable delay, reproduced each face by adjusting a morph to match it. Estimates were biased toward the center of the presented stimulus range, and the bias increased at longer memory delays, consistent with the Bayesian prediction that as trace memory loses precision, category knowledge is given more weight. The central tendency effect persisted within and across emotion categories (sad, neutral, and happy). 
This article expands the scope of work on inductive category effects to memory for complex, emotional stimuli. 6. Spatio-Temporal Distribution of Bark and Ambrosia Beetles in a Brazilian Tropical Dry Forest. Science.gov (United States) Macedo-Reis, Luiz Eduardo; Novais, Samuel Matos Antunes de; Monteiro, Graziela França; Flechtmann, Carlos Alberto Hector; Faria, Maurício Lopes de; Neves, Frederico de Siqueira 2016-01-01 Bark and ambrosia beetles dig into host plants and live most of their lives in concealed tunnels. We assessed beetle community dynamics in tropical dry forest sites in early, intermediate, and late successional stages, evaluating the influence of resource availability and seasonal variations in guild structure. We collected a total of 763 beetles from 23 species, including 14 bark beetle species and 9 ambrosia beetle species. Local richness of bark and ambrosia beetles was estimated at 31 species. Bark and ambrosia composition was similar over the successional stage gradient, and beta diversity among sites was primarily determined by species turnover, mainly in the bark beetle community. Bark beetle richness and abundance were higher at intermediate stages; availability of wood was the main spatial mechanism. Climate factors were effectively non-seasonal. Ambrosia beetles were not influenced by successional stages; however, the increase in wood resulted in increased abundance. We found higher richness at the end of the dry and wet seasons, and abundance increased with air moisture and decreased with higher temperatures and greater rainfall. In summary, bark beetle species accumulation was higher at sites with better wood production, while the needs of fungi (host and air moisture) resulted in favorable conditions for species accumulation of ambrosia beetles. The overall biological pattern among guilds differed from tropical rain forests, showing patterns similar to dry forest areas. © The Author 2016.
Published by Oxford University Press on behalf of the Entomological Society of America. 7. Development and characterization of ice cream enriched with different formulations of jabuticaba bark flour (Myrciaria cauliflora) Directory of Open Access Journals (Sweden) Marina Leopoldina Lamounier 2015-09-01 Full Text Available The aim was to perform the physicochemical characterization of jabuticaba bark flour, as well as to develop three ice cream formulations (enriched with 0, 5 and 10% of this flour) and evaluate their physicochemical and sensory characteristics. Fruits were pulped, and the peels were dehydrated, dried, crushed and sieved to obtain the flour, which was analyzed for physicochemical parameters. Then, three ice cream formulations were developed (with 0%, 5% and 10% jabuticaba bark flour), considering the physicochemical and sensorial characteristics. The results showed that jabuticaba bark flour had high ash and fiber contents. The ice creams showed differences (p < 0.05) for pH, titratable acidity, moisture and ash due to the incorporation of jabuticaba bark flour. The only attribute that did not differ (p > 0.05) was soluble solids. The overrun decreased with increasing addition of flour. In the sensory evaluation, the only attributes that differed (p < 0.05) were flavor, texture and overall appearance of the formulation with 10% jabuticaba bark flour, which indicates that incorporation of 5% jabuticaba bark flour did not affect the acceptability of the ice creams. It can be concluded that enrichment with jabuticaba bark flour increases the nutritional value of the ice cream without affecting its sensory characteristics at the 5% addition level. 8. Polyphenolic Composition and Antioxidant Activity of Aqueous and Ethanolic Extracts from Uncaria tomentosa Bark and Leaves.
Science.gov (United States) Navarro-Hoyos, Mirtha; Alvarado-Corella, Diego; Moreira-Gonzalez, Ileana; Arnaez-Serrano, Elizabeth; Monagas-Juan, Maria 2018-05-11 Uncaria tomentosa constitutes an important source of secondary metabolites with diverse biological activities, until recently attributed mainly to alkaloids and triterpenes. We have previously reported for the first time the polyphenolic profile of extracts from U. tomentosa, using a multi-step process involving organic solvents, as well as their antioxidant capacity, antimicrobial activity on aerial bacteria, and cytotoxicity on cancer cell lines. These promising results prompted the present study using food grade solvents suitable for the elaboration of commercial extracts. We report a detailed study on the polyphenolic composition of aqueous and ethanolic extracts of U. tomentosa bark and leaves (n = 16), using High Performance Liquid Chromatography coupled with Mass Spectrometry (HPLC-DAD/TQ-ESI-MS). A total of 32 compounds were identified, including hydroxybenzoic and hydroxycinnamic acids, flavan-3-ol monomers, procyanidin dimers and trimers, flavalignans-cinchonains and propelargonidin dimers. Our findings showed that the leaves were the richest source of total phenolics and proanthocyanidins, in particular propelargonidin dimers. Two-way Analysis of Variance (ANOVA) indicated that the contents of procyanidin and propelargonidin dimers were significantly different (p < 0.05); the leaves thus yielded extracts rich in proanthocyanidins and exhibiting high antioxidant activity. 9. Cork oak vulnerability to fire: the role of bark harvesting, tree characteristics and abiotic factors. Directory of Open Access Journals (Sweden) Filipe X Catry Full Text Available Forest ecosystems where periodical tree bark harvesting is a major economic activity may be particularly vulnerable to disturbances such as fire, since debarking usually reduces tree vigour and protection against external agents.
In this paper we asked how cork oak Quercus suber trees respond after wildfires and, in particular, how bark harvesting affects post-fire tree survival and resprouting. We gathered data from 22 wildfires (4585 trees) that occurred in three southern European countries (Portugal, Spain and France), covering a wide range of conditions characteristic of Q. suber ecosystems. Post-fire tree responses (tree mortality, stem mortality and crown resprouting) were examined in relation to management and ecological factors using generalized linear mixed-effects models. Results showed that bark thickness and bark harvesting are major factors affecting resistance of Q. suber to fire. Fire vulnerability was higher for trees with thin bark (young or recently debarked individuals) and decreased with increasing bark thickness until cork was 3-4 cm thick. This bark thickness corresponds to the moment when exploited trees are debarked again, meaning that exploited trees are vulnerable to fire during a longer period. Exploited trees were also more likely to be top-killed than unexploited trees, even for the same bark thickness. Additionally, vulnerability to fire increased with burn severity and with tree diameter, and was higher in trees burned in early summer or located in drier south-facing aspects. We provided tree response models useful for estimating the impact of fire and to support management decisions. The results suggested that an appropriate management of surface fuels and changes in the bark harvesting regime (e.g. debarking coexisting trees in different years or increasing the harvesting cycle) would decrease vulnerability to fire and contribute to the conservation of cork oak ecosystems. 10. Cork oak vulnerability to fire: the role of bark harvesting, tree characteristics and abiotic factors.
Science.gov (United States) Catry, Filipe X; Moreira, Francisco; Pausas, Juli G; Fernandes, Paulo M; Rego, Francisco; Cardillo, Enrique; Curt, Thomas 2012-01-01 Forest ecosystems where periodical tree bark harvesting is a major economic activity may be particularly vulnerable to disturbances such as fire, since debarking usually reduces tree vigour and protection against external agents. In this paper we asked how cork oak Quercus suber trees respond after wildfires and, in particular, how bark harvesting affects post-fire tree survival and resprouting. We gathered data from 22 wildfires (4585 trees) that occurred in three southern European countries (Portugal, Spain and France), covering a wide range of conditions characteristic of Q. suber ecosystems. Post-fire tree responses (tree mortality, stem mortality and crown resprouting) were examined in relation to management and ecological factors using generalized linear mixed-effects models. Results showed that bark thickness and bark harvesting are major factors affecting resistance of Q. suber to fire. Fire vulnerability was higher for trees with thin bark (young or recently debarked individuals) and decreased with increasing bark thickness until cork was 3-4 cm thick. This bark thickness corresponds to the moment when exploited trees are debarked again, meaning that exploited trees are vulnerable to fire during a longer period. Exploited trees were also more likely to be top-killed than unexploited trees, even for the same bark thickness. Additionally, vulnerability to fire increased with burn severity and with tree diameter, and was higher in trees burned in early summer or located in drier south-facing aspects. We provided tree response models useful for estimating the impact of fire and to support management decisions. The results suggested that an appropriate management of surface fuels and changes in the bark harvesting regime (e.g.
debarking coexisting trees in different years or increasing the harvesting cycle) would decrease vulnerability to fire and contribute to the conservation of cork oak ecosystems. 11. Cork Oak Vulnerability to Fire: The Role of Bark Harvesting, Tree Characteristics and Abiotic Factors Science.gov (United States) Catry, Filipe X.; Moreira, Francisco; Pausas, Juli G.; Fernandes, Paulo M.; Rego, Francisco; Cardillo, Enrique; Curt, Thomas 2012-01-01 Forest ecosystems where periodical tree bark harvesting is a major economic activity may be particularly vulnerable to disturbances such as fire, since debarking usually reduces tree vigour and protection against external agents. In this paper we asked how cork oak Quercus suber trees respond after wildfires and, in particular, how bark harvesting affects post-fire tree survival and resprouting. We gathered data from 22 wildfires (4585 trees) that occurred in three southern European countries (Portugal, Spain and France), covering a wide range of conditions characteristic of Q. suber ecosystems. Post-fire tree responses (tree mortality, stem mortality and crown resprouting) were examined in relation to management and ecological factors using generalized linear mixed-effects models. Results showed that bark thickness and bark harvesting are major factors affecting resistance of Q. suber to fire. Fire vulnerability was higher for trees with thin bark (young or recently debarked individuals) and decreased with increasing bark thickness until cork was 3–4 cm thick. This bark thickness corresponds to the moment when exploited trees are debarked again, meaning that exploited trees are vulnerable to fire during a longer period. Exploited trees were also more likely to be top-killed than unexploited trees, even for the same bark thickness. Additionally, vulnerability to fire increased with burn severity and with tree diameter, and was higher in trees burned in early summer or located in drier south-facing aspects. 
We provided tree response models useful for estimating the impact of fire and to support management decisions. The results suggested that an appropriate management of surface fuels and changes in the bark harvesting regime (e.g. debarking coexisting trees in different years or increasing the harvesting cycle) would decrease vulnerability to fire and contribute to the conservation of cork oak ecosystems. PMID:22787521 12. Biomonitoring of airborne inorganic and organic pollutants by means of pine tree barks. II. Deposition types and impact levels International Nuclear Information System (INIS) Schulz, H.; Schulz, U.; Huhn, G.; Schuermann, G. 2000-01-01 A total of 273 pine bark samples collected from various pine stands in Central and East Germany, South Norway, Poland, and Russia were analyzed with respect to 20 inorganic and organic substances (sulphate, nitrate, ammonia, calcium, 3 PAHs, 5 heavy metals, 9 other elements). Multivariate statistics were applied to characterize the multiple exposure of airborne pollutants in terms of major sources, deposition types and impact levels. The former was studied with factor analysis, whilst the latter two were addressed by applying cluster and discriminant analysis. Factor analysis of the concentration values suggests separation into three factors with the following characteristics: Factor 1 shows higher contributions from sulphate and calcium, factor 2 from fluoranthene, benzo(a)pyrene as well as from pyrene, and factor 3 from nitrate and ammonia, respectively. According to results from the cluster analysis, three major deposition types can be identified: 'Industry and House heating', 'Motor traffic', and 'Agriculture'. The first deposition type is characterized by high contents of sulphate and calcium. The other two deposition types contain specific composition profiles for nitrogen-containing components and PAHs. Impact levels are separately classified with the characteristic variables of main deposition types.
Finally, discriminant analysis is used to allocate new bark samples to the classified deposition types and impact levels. The results demonstrate the usefulness of multivariate statistical techniques to characterize and evaluate multiple exposure patterns of airborne pollutants in forest ecosystems. (author) 13. Studies on the biocidal and cell membrane disruption potentials of stem bark extracts of Afzelia africana (Smith) Directory of Open Access Journals (Sweden) DAVID A AKINPELU 2009-01-01 Full Text Available We had recently reported antibacterial activity in the crude extract of the stem bark of Afzelia africana (Akinpelu et al., 2008). In this study, we assessed the biocidal and cell membrane disruption potentials of fractions obtained from the crude extract of the plant. The aqueous (AQ) and butanol (BL) fractions exhibited appreciable antibacterial activities against the test bacteria. The minimum inhibitory concentrations of the AQ and BL fractions ranged between 0.313 and 2.5 mg/ml, while their minimum bactericidal concentrations varied between 0.625 and 5.0 mg/ml. Also, the AQ fraction killed about 95.8% of E. coli cells within 105 min at a concentration of 5 mg/ml, while about 99.1% of Bacillus pumilus cells were killed by this fraction at the same concentration and exposure time. A similar trend was observed for the BL fraction. At a concentration of 5 mg/ml, the butanol fraction leaked 9.8 μg/ml of proteins from E. coli cells within 3 h, while the aqueous fraction leaked 6.5 μg/ml of proteins from the same organisms at the same concentration and exposure time. We propose that the stem bark of Afzelia africana is a potential source of bioactive compounds of importance to the pharmaceutical industry. 14. Tannin analysis of chestnut bark samples (Castanea sativa Mill.) by HPLC-DAD-MS.
Science.gov (United States) Comandini, Patrizia; Lerma-García, María Jesús; Simó-Alfonso, Ernesto Francisco; Toschi, Tullia Gallina 2014-08-15 In the present investigation, an HPLC-DAD/ESI-MS method for the complete analysis of tannins and other phenolic compounds of different commercial chestnut bark samples was developed. A total of seven compounds (vescalin, castalin, gallic acid, vescalagin, 1-O-galloyl castalagin, castalagin and ellagic acid) were separated and quantified, being 1-O-galloyl castalagin tentatively identified and found for the first time in chestnut bark samples. Thus, this method provided information regarding the composition and quality of chestnut bark samples, which is required since these samples are commercialised due to their biochemical properties as ingredients of food supplements. Copyright © 2014 Elsevier Ltd. All rights reserved. 15. Energy capacity of black wattle wood and bark in different spacing plantations Directory of Open Access Journals (Sweden) Elder Eloy 2015-06-01 Full Text Available The study aimed at the energetic description of wood and bark biomass of Acacia mearnsii De Wild. in two spacing plantations: 2.0 m × 3.0 m × 1.0 m and 1.5 m, during 36 months after the planting. The experiment was conducted in the municipality of Frederico Westphalen, state of Rio Grande do Sul, Brazil. Biomass (BIO, calorific value, basic density, ash content, volatile matter and fixed carbon content and energy density (ED of wood and bark were determined. The smallest spacing plantation presented the highest production per unit area of BIO and ED of wood and bark. 16. An Efficient, Robust, and Inexpensive Grinding Device for Herbal Samples like Cinchona Bark. 
Science.gov (United States) Hansen, Steen Honoré; Holmfred, Else; Cornett, Claus; Maldonado, Carla; Rønsted, Nina 2015-01-01 An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery of analytes in extracts of Cinchona bark was investigated using HPLC. 17. A Potential Tool for Swift Fox (Vulpes velox) Conservation: Individuality of Long-Range Barking Sequences DEFF Research Database (Denmark) Darden, Safi-Kirstine Klem; Dabelsteen, Torben; Pedersen, Simon Boel 2003-01-01 Vocal individuality has been found in a number of canid species. This natural variation can have applications in several aspects of species conservation, from behavioral studies to estimating population density or abundance. The swift fox (Vulpes velox) is a North American canid listed as endangered ... in Canada and extirpated, endangered, or threatened in parts of the United States. The barking sequence is a long-range vocalization in the species' vocal repertoire. It consists of a series of barks and is most common during the mating season. We analyzed barking sequences recorded in a standardized... 18. Investigating tree bark as an air-pollution biomonitor by means of neutron activation analysis International Nuclear Information System (INIS) Pacheco, A.M.G.; Figueira, R. 2001-01-01 The olive tree (Olea europaea) is an icon of southern Europe and a widespread evergreen in mainland Portugal.
First results of a continuing study on the ability of olive-tree bark to act as an air-pollution biomonitor are presented and discussed here. Other than lower signals and an anticipated systemic control over some elements, there seems to be no a priori reason for ruling out the possibility of using bark in atmospheric trace-element surveys. In fact, nonparametric statistics show that, despite their relative magnitude, the variation patterns of bark and lichen concentrations significantly follow one another all across the study area. (author) 19. Biased Allocation of Faces to Social Categories NARCIS (Netherlands) Dotsch, R.; Wigboldus, D.H.J.; Knippenberg, A.F.M. van 2011-01-01 Three studies show that social categorization is biased at the level of category allocation. In all studies, participants categorized faces. In Studies 1 and 2, participants overallocated faces with criminal features (a stereotypical negative trait) to the stigmatized Moroccan category, especially if 20. The ethnic category from a linguistic perspective Directory of Open Access Journals (Sweden) Răzvan Săftoiu 2017-03-01 Full Text Available In this paper, I put forward an analysis from a linguistic perspective of an ethnic category in Romania that is defined by at least two terms: gypsy and Romany. The concept of category refers to the members of a particular group that set themselves apart from other groups through a set of specific elements acknowledged at the level of a larger community. In interaction, individuals frequently use categories and the set of features that a certain category is characterized by, since it is easier to deal with sets of knowledge than with references for each individual separately. The analysis is based on a series of expressions and phrases, proverbs and jokes which were (or still are) circulating in the Romanian space and which delineated, at the level of the collective mentality, the image of an ethnic category whose name (still) oscillates between two terms.
The texts were grouped depending on the different stereotypes associated with the ethnic category under discussion, by highlighting the pejorative connotations of the uses of the term gypsy in relation to the ethnic category Romany, a significance-free category that can be ‘filled up’ by elements that can sketch a positive image. 1. Shape configuration and category-specificity DEFF Research Database (Denmark) Gerlach, Christian; Law, Ian; Paulson, Olaf B. 2006-01-01 a recent account of category-specificity and lends support to the notion that category-specific impairments can occur for both natural objects and artefacts following damage to pre-semantic stages in visual object recognition. The implications of the present findings are discussed in relation to theories... 2. Conformal field theories and tensor categories. Proceedings Energy Technology Data Exchange (ETDEWEB) Bai, Chengming [Nankai Univ., Tianjin (China). Chern Institute of Mathematics; Fuchs, Juergen [Karlstad Univ. (Sweden). Theoretical Physics; Huang, Yi-Zhi [Rutgers Univ., Piscataway, NJ (United States). Dept. of Mathematics; Kong, Liang [Tsinghua Univ., Beijing (China). Inst. for Advanced Study; Runkel, Ingo; Schweigert, Christoph (eds.) [Hamburg Univ. (Germany). Dept. of Mathematics 2014-08-01 First book devoted completely to the mathematics of conformal field theories, tensor categories and their applications. Contributors include both mathematicians and physicists. Some long expository articles are especially suitable for beginners. The present volume is a collection of seven papers that are either based on the talks presented at the workshop ''Conformal field theories and tensor categories'' held June 13 to June 17, 2011 at the Beijing International Center for Mathematical Research, Peking University, or are extensions of the material presented in the talks at the workshop. 
These papers present new developments beyond rational conformal field theories and modular tensor categories and new applications in mathematics and physics. The topics covered include tensor categories from representation categories of Hopf algebras, applications of conformal field theories and tensor categories to topological phases and gapped systems, logarithmic conformal field theories and the corresponding non-semisimple tensor categories, and new developments in the representation theory of vertex operator algebras. Some of the papers contain detailed introductory material that is helpful for graduate students and researchers looking for an introduction to these research directions. The papers also discuss exciting recent developments in the area of conformal field theories, tensor categories and their applications and will be extremely useful for researchers working in these areas. 3. Color descriptors for object category recognition NARCIS (Netherlands) van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M. 2008-01-01 Category recognition is important to access visual information on the level of objects. A common approach is to compute image descriptors first and then to apply machine learning to achieve category recognition from annotated examples. As a consequence, the choice of image descriptors is of great 4. Operadic categories and duoidal Deligne's conjecture Czech Academy of Sciences Publication Activity Database Batanin, M.; Markl, Martin 2015-01-01 Roč. 285, 5 November (2015), s. 1630-1687 ISSN 0001-8708 Institutional support: RVO:67985840 Keywords : operadic category * duoidal category * Deligne's conjecture Subject RIV: BA - General Mathematics Impact factor: 1.405, year: 2015 http://www.sciencedirect.com/science/article/pii/S0001870815002467 5. Conformal field theories and tensor categories. 
Proceedings International Nuclear Information System (INIS) Bai, Chengming; Fuchs, Juergen; Huang, Yi-Zhi; Kong, Liang; Runkel, Ingo; Schweigert, Christoph 2014-01-01 First book devoted completely to the mathematics of conformal field theories, tensor categories and their applications. Contributors include both mathematicians and physicists. Some long expository articles are especially suitable for beginners. The present volume is a collection of seven papers that are either based on the talks presented at the workshop ''Conformal field theories and tensor categories'' held June 13 to June 17, 2011 at the Beijing International Center for Mathematical Research, Peking University, or are extensions of the material presented in the talks at the workshop. These papers present new developments beyond rational conformal field theories and modular tensor categories and new applications in mathematics and physics. The topics covered include tensor categories from representation categories of Hopf algebras, applications of conformal field theories and tensor categories to topological phases and gapped systems, logarithmic conformal field theories and the corresponding non-semisimple tensor categories, and new developments in the representation theory of vertex operator algebras. Some of the papers contain detailed introductory material that is helpful for graduate students and researchers looking for an introduction to these research directions. The papers also discuss exciting recent developments in the area of conformal field theories, tensor categories and their applications and will be extremely useful for researchers working in these areas. 6. Connections between realcompactifications in various categories ... 
African Journals Online (AJOL) The author gives a detailed analysis of the relation between the theories of realcompactifications and compactifications in the category of ditopological texture spaces and in the categories of bitopological spaces and topological spaces. Keywords: Bitopology, texture, ditopology, Stone-Čech compactification, Hewitt real- ... 7. Finding biomedical categories in Medline® Directory of Open Access Journals (Sweden) Yeganova Lana 2012-10-01 Full Text Available Abstract Background There are several humanly defined ontologies relevant to Medline. However, Medline is a fast-growing collection of biomedical documents, which creates difficulties in updating and expanding these humanly defined ontologies. Automatically identifying meaningful categories of entities in a large text corpus is useful for information extraction, construction of machine learning features, and development of semantic representations. In this paper we describe and compare two methods for automatically learning meaningful biomedical categories in Medline. The first approach is a simple statistical method that uses part-of-speech and frequency information to extract a list of frequent nouns from Medline. The second method implements an alignment-based technique to learn frequent generic patterns that indicate a hyponymy/hypernymy relationship between a pair of noun phrases. We then apply these patterns to Medline to collect frequent hypernyms as potential biomedical categories. Results We study and compare these two alternative sets of terms to identify semantic categories in Medline. We find that both approaches produce reasonable terms as potential categories. We also find that there is significant agreement between the two sets of terms. The overlap between the two methods improves our confidence regarding categories predicted by these independent methods. Conclusions This study is an initial attempt to extract categories that are discussed in Medline.
Rather than imposing external ontologies on Medline, our methods allow categories to emerge from the text. 8. Appropriate Pupilness: Social Categories Intersecting in School Science.gov (United States) Kofoed, Jette 2008-01-01 The analytical focus in this article is on how social categories intersect in daily school life and how intersections intertwine with other empirically relevant categories such as normality, pupilness and (in)appropriateness. The point of empirical departure is a daily ritual where teams for football are selected. The article opens up for a… 9. Diagnostic Categories in Autobiographical Accounts of Illness. Science.gov (United States) Kelly, Michael P 2015-01-01 Working within frameworks drawn from the writings of Immanuel Kant, Alfred Schutz, and Kenneth Burke, this article examines the role that diagnostic categories play in autobiographical accounts of illness, with a special focus on chronic disease. Four lay diagnostic categories, each with different connections to formal medical diagnostic categories, serve as typifications to make sense of the way the lifeworld changes over the course of chronic illness. These diagnostic categories are used in conjunction with another set of typifications: lay epidemiologies, lay etiologies, lay prognostics, and lay therapeutics. Together these serve to construct and reconstruct the self at the center of the lifeworld. Embedded within the lay diagnostic categories are narratives of progression, regression, or stability, forms of typification derived from literary and storytelling genres. These narratives are developed by the self in autobiographical accounts of illness. 10. Modular categories and 3-manifold invariants International Nuclear Information System (INIS) Turaev, V.G.
1992-01-01 The aim of this paper is to give a concise introduction to the theory of knot invariants and 3-manifold invariants which generalize the Jones polynomial and which may be considered as a mathematical version of the Witten invariants. Such a theory was introduced by N. Reshetikhin and the author on the basis of the theory of quantum groups. Here we use more general algebraic objects, specifically, ribbon and modular categories. Such categories in particular arise as the categories of representations of quantum groups. The notion of modular category, interesting in itself, is closely related to the notion of modular tensor category in the sense of G. Moore and N. Seiberg. For simplicity we restrict ourselves in this paper to the case of closed 3-manifolds 11. SUSTAIN: a network model of category learning. Science.gov (United States) Love, Bradley C; Medin, Douglas L; Gureckis, Todd M 2004-04-01 SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes-attractors-rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning. 12. Methodological Aspects of Depreciation as an Economic Category OpenAIRE Sigidov, Yuriy I.; Rybyantseva, Maria S.; Adamenko, Alexandr A.; Yarushkina, Elena A.
2016-01-01 Depreciation is a complex economic category whose essence is manifested in a duality: it is both a cost element and a source of reproduction of fixed assets and intangible assets. Depreciation bears a relationship to both the asset and liability sides of the balance sheet; it touches on aspects such as cost formation, taxation issues, and the reproduction process. That is why a methodological study of the essence of depreciation, the allocation of classification bases, principles and functions seems u... 13. Social Vision: Visual cues communicate categories to observers OpenAIRE Johnson, Kerri L 2009-01-01 This information ranges from appreciating category membership to evaluating more enduring traits and dispositions. These aspects of social perception appear to be highly automated, some would even call them obligatory, and they are heavily influenced by two sources of information: the face and the body. From minimal information such as brief exposure to the face or degraded images of dynamic body motion, social judgments are made with remarkable efficiency and, at times, surprising accuracy. 14. Can height categories replace weight categories in striking martial arts competitions? A pilot study. Science.gov (United States) Dubnov-Raz, Gal; Mashiach-Arazi, Yael; Nouriel, Ariella; Raz, Raanan; Constantini, Naama W 2015-09-29 In most combat sports and martial arts, athletes compete within weight categories. Disordered eating behaviors and intentional pre-competition rapid weight loss are commonly seen in this population, attributed to weight categorization. We examined if height categories can be used as an alternative to weight categories for competition, in order to protect the health of athletes. Height and weight of 169 child and adolescent competitive karate athletes were measured. Participants were divided into eleven hypothetical weight categories of 5 kg increments, and eleven hypothetical height categories of 5 cm increments.
We calculated the coefficient of variation of height and weight by each division method. We also calculated how many participants fit into corresponding categories of both height and weight, and how many would shift a category if divided by height. There was a high correlation between height and weight (r = 0.91, p<0.001). The mean range of heights seen within current weight categories was reduced by 83% when participants were divided by height. When allocating athletes by height categories, 74% of athletes would shift up or down one weight category at most, compared with the current categorization method. We conclude that dividing young karate athletes by height categories significantly reduced the range of heights of competitors within the category. Such categorization would not cause athletes to compete against much heavier opponents in most cases. Using height categories as a means to reduce eating disorders in combat sports should be further examined. 15. Condensed tannins from the bark of Guazuma ulmifolia Lam. (Sterculiaceae) Energy Technology Data Exchange (ETDEWEB) Lopes, Gisely C.; Rocha, Juliana C.B.; Mello, Joao C.P. de [Universidade Estadual de Maringa (UEM), PR (Brazil). Programa de Pos-graduacao em Ciencias Farmaceuticas], e-mail: [email protected]; Almeida, Glalber C. de [Universidade Estadual de Maringa (UEM), PR (Brazil)] 2009-07-01 From the bark of Guazuma ulmifolia Lam. (Sterculiaceae), nine compounds were isolated and identified: ent-catechin, epicatechin, ent-gallocatechin, epigallocatechin, epiafzelechin-(4β→8)-epicatechin, epicatechin-(4β→8)-catechin (procyanidin B1), epicatechin-(4β→8)-epicatechin (procyanidin B2), epicatechin-(4β→8)-epigallocatechin, and the new compound 4'-O-methyl-epiafzelechin. Their structures were elucidated on the basis of spectral and literature data.
HPLC fingerprint analysis of the semipurified extract was performed on a C18 column, with a mixture of acetonitrile (0.05% trifluoroacetic acid):water (0.05% trifluoroacetic acid) (v/v) with a flow rate of 0.8 mL min-1. The sample injection volume was 100 µL and the wavelength was 210 nm. (author) 16. Flavonoid Compounds from the Bark of Aglaia eximia (Meliaceae) Directory of Open Access Journals (Sweden) Julinton Sianturi 2015-03-01 Full Text Available Three flavonoid compounds, kaempferol (1), kaempferol-3-O-α-L-rhamnoside (2), and kaempferol-3-O-β-D-glucosyl-α-L-rhamnoside (3), were isolated from the bark of Aglaia eximia (Meliaceae). The chemical structures of compounds 1–3 were identified with spectroscopic data, including UV, IR, NMR (1H, 13C, DEPT 135°, HMQC, HMBC, 1H-1H-COSY NMR), and MS, as well as comparison with previously reported spectral data. All compounds were evaluated for their cytotoxic effects against P-388 murine leukemia cells. Compounds 1–3 showed cytotoxicity against P-388 murine leukemia cells with IC50 values of 1.22, 42.92, and >100 mg/mL, respectively. 17. Installations of SNCR on bark-fired boilers International Nuclear Information System (INIS) Hjalmarsson, A.K.; Hedin, K.; Andersson, Lars 1997-01-01 Experience has been collected from the twelve bark-fired boilers in Sweden with selective non-catalytic reduction (SNCR) installations to reduce emissions of nitrogen oxides. Most of the boilers have slope grates, but there are also two boilers with cyclone ovens and two fluidized bed boilers. In addition to oil, most boilers can also burn other fuel types, such as sludge from different parts of the pulp and paper mills, sawdust and wood chips. The SNCR installations seem in general to be of simple design. In most installations the injection nozzles are located in existing holes in the boiler walls. The availability is reported to be good from several of the SNCR installations.
There has been tube leakage in several boilers. The urea system has resulted in corrosion and in clogging of one oil burner. This incident has resulted in a decision not to use the SNCR system with its present design. The fuel has also caused operational problems with the SNCR system in several of the installations due to variations in the moisture content and often high moisture content in bark and sludge, causing temperature variations. The availability is reported to be high for the SNCR system at several of the plants, in two of them about 90%. The results in NOx reduction vary between the installations depending on boiler, fuel and operation. The emissions are between 45 and 100 mg NO2/MJ fuel input and the NOx reduction rates are in most installations between 30 and 40%, the lowest 20 and the highest 70%. 13 figs, 3 tabs 18. Observation versus classification in supervised category learning. Science.gov (United States) Levering, Kimery R; Kurtz, Kenneth J 2015-02-01 The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understanding learning modes and their outcomes.
Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories. 19. Ameliorative Activity of Ethanolic Extract of Artocarpus heterophyllus Stem Bark on Alloxan-induced Diabetic Rats Directory of Open Access Journals (Sweden) Basiru Olaitan Ajiboye 2018-03-01 Full Text Available Purpose: Diabetes mellitus is one of the major endocrine disorders, characterized by impaired insulin action and deficiency. Traditionally, Artocarpus heterophyllus stem bark has been reputably used in the management of diabetes mellitus and its complications. The present study evaluates the ameliorative activity of ethanol extract of Artocarpus heterophyllus stem bark in alloxan-induced diabetic rats. Methods: Diabetes mellitus was induced by single intraperitoneal injection of 150 mg/kg body weight of alloxan and the animals were orally administered with 50, 100 and 150 mg/kg body weight ethanol extract of Artocarpus heterophyllus stem bark once daily for 21 days. Results: At the end of the intervention, diabetic control rats showed significant (p < 0.05) alterations in the parameters assessed, while values in extract-treated rats were not significantly (p > 0.05) different from those of non-diabetic rats. Conclusion: The results suggest that ethanol extract of Artocarpus heterophyllus stem bark may be useful in ameliorating complications associated with diabetes mellitus. 20. Phoretic mites of three bark beetles (Pityokteines spp.)
on silver fir Science.gov (United States) Milan Pernek; Boris Hrasovec; Dinka Matosevic; Ivan Pilas; Thomas Kirisits; John C. Moser 2008-01-01 The species composition and abundance of phoretic mites of the bark beetles Pityokteines curvidens, P. spinidens, and P. vorontzowi on Silver fir (Abies alba) were investigated in 2003 at two locations (Trakoscan and Litoric) in Croatia. Stem sections and... 1. Acidity of tree bark as a bioindicator of forest pollution in southern Poland Energy Technology Data Exchange (ETDEWEB) Grodzinska, K 1977-05-01 pH values and buffering capacity were determined for bark samples of five deciduous trees (oak, alder, hornbeam, ash, linden), one shrub (hazel) and one coniferous tree (Scots pine) in the Cracow Industrial Region (southern Poland) and, for comparison, in the Bialowieza Forest (northeastern Poland). A correlation was found between acidification of tree bark and air pollution by SO2 in these areas. All trees showed the least acidic reaction in the control area (Bialowieza Forest), more acidic in Niepolomice Forest and the most acidic in the center of Cracow. The buffering capacity of the bark against alkali increased with increasing air pollution. Seasonal fluctuations of pH values and buffering capacity were found. Tree bark is recommended as a sensitive and simple indicator of air pollution. 2. The Wood and Bark of Hardwoods Growing on Southern Pine Sites - A Pictorial Atlas Science.gov (United States) Charles W. McMillin; Floyd G. Manwiller 1980-01-01 Provides a pictorial description of the structure and appearance of 23 pine-site hardwoods, an overview of hardwood anatomy, and data on the resource and certain important physical properties of stemwood and bark. 3. The impact of category structure and training methodology on learning and generalizing within-category representations.
Science.gov (United States) Ell, Shawn W; Smith, David B; Peralta, Gabriela; Hélie, Sébastien 2017-08-01 When interacting with categories, representations focused on within-category relationships are often learned, but the conditions promoting within-category representations and their generalizability are unclear. We report the results of three experiments investigating the impact of category structure and training methodology on the learning and generalization of within-category representations (i.e., correlational structure). Participants were trained on either rule-based or information-integration structures using classification (Is the stimulus a member of Category A or Category B?), concept (e.g., Is the stimulus a member of Category A, Yes or No?), or inference (infer the missing component of the stimulus from a given category) and then tested on either an inference task (Experiments 1 and 2) or a classification task (Experiment 3). For the information-integration structure, within-category representations were consistently learned, could be generalized to novel stimuli, and could be generalized to support inference at test. For the rule-based structure, extended inference training resulted in generalization to novel stimuli (Experiment 2) and inference training resulted in generalization to classification (Experiment 3). These data help to clarify the conditions under which within-category representations can be learned. Moreover, these results make an important contribution in highlighting the impact of category structure and training methodology on the generalization of categorical knowledge. 4. A Higher-Order Calculus for Categories DEFF Research Database (Denmark) Cáccamo, Mario José; Winskel, Glynn 2001-01-01 A calculus for a fragment of category theory is presented. The types in the language denote categories and the expressions functors. 
The judgements of the calculus systematise categorical arguments such as: an expression is functorial in its free variables; two expressions are naturally isomorphic in their free variables. There are special binders for limits and more general ends. The rules for limits and ends support an algebraic manipulation of universal constructions as opposed to a more traditional diagrammatic approach. Duality within the calculus and applications in proving continuity are discussed with examples. The calculus gives a basis for mechanising a theory of categories in a generic theorem prover like Isabelle. 5. Kuranishi spaces as a 2-category OpenAIRE Joyce, Dominic 2015-01-01 This is a survey of the author's in-progress book arXiv:1409.6908. 'Kuranishi spaces' were introduced in the work of Fukaya, Oh, Ohta and Ono in symplectic geometry (see e.g. arXiv:1503.07631), as the geometric structure on moduli spaces of $J$-holomorphic curves. We propose a new definition of Kuranishi space, which has the nice property that they form a 2-category $\bf Kur$. Thus the homotopy category Ho$({\bf Kur})$ is an ordinary category of Kuranishi spaces. Any Fukaya-Oh-Ohta-Ono (FOOO)... 6. Categories of space in music and lifestyles Directory of Open Access Journals (Sweden) Milenković Pavle 2015-01-01 Full Text Available This paper discusses the connection between categories of space in music, music production and lifestyles. The relations between the symbolic space of social connections and musical contents in the social space of various status interactions are complex and contradictory. The category of space in music exists in four forms. Categories of space in the description of the experience of musical works, as well as in the way of music production (spacing), are an integral part of a special way of consumption of these works (home Hi-Fi), and represent social status, ways of cultural consumption and habitus in general. 7.
Dose mapping in category I irradiators International Nuclear Information System (INIS) Mondal, Sandip; Shinde, S.H.; Mhatre, S.G.V. 2012-01-01 Category I irradiators such as Gamma Chambers and Blood Irradiators are compact, self-shielded, dry-source-storage gamma irradiators offering an irradiation volume of a few hundred cubic centimeters. In the present work, dose distribution profiles along the central vertical plane of the irradiation volume of Gamma Chamber 900 and Blood Irradiator 2000 were measured using Fricke, FBX, and alanine dosimeters. Measured dose distribution profiles in Gamma Chamber 900 differed from the typical generic dose distribution pattern whereas that in Blood Irradiator 2000 was in agreement with the typical pattern. All reagents used were of analytical reagent grade and were used without further purification. Preparation and dose estimations of Fricke and FBX were carried out as recommended. Alanine pellets were directly placed in a precleaned polystyrene container having dimensions 6.5 mm o.d., 32 mm height and 3 mm wall thickness. For these dosimeters, dose measurements were made using an e-scan Bruker BioSpin alanine-dedicated ESR spectrometer. Specially designed perspex jigs were used during irradiation in Gamma Chamber 900 and Blood Irradiator 2000. These jigs provided a reproducible geometry during irradiation. Absorbance measurements were made using a spectrophotometer calibrated as per the recommended procedure. In Gamma Chamber 900, there is a dose distribution variation of about 34% from top to the center, 18% from center to the bottom, and 15% from center to the periphery. Such a dose distribution profile deviates markedly from the typical profile, wherein a 15% variation is observed from center to the periphery on all sides. Further investigation showed that there was a misalignment between the source and the sample chamber.
However, in Blood Irradiator 2000, there is a dose distribution variation of about 20% from top to the center, 15% from center to the bottom, and 12% from center to the periphery. This pattern is very much similar to the typical profile. Hence it is recommended 8. Evaluation of phytochemical and pharmacological properties of Aegiceras corniculatum Blanco (Myrsinaceae) bark OpenAIRE Bose, Utpal; Bala, Vaskor; Rahman, Ahmed A.; Shahid, Israt Z. 2010-01-01 The methanol extract of the dried barks of Aegiceras corniculatum Blanco (Myrsinaceae) was investigated for its possible antinociceptive, cytotoxic and antidiarrhoeal activities in animal models. The preliminary studies of A. corniculatum bark showed the presence of alkaloids, glycosides, steroids, flavonoids, saponins and tannins. The extract produced significant writhing inhibition in acetic acid-induced writhing in mice at the oral dose of 250 and 500 mg/kg body weight (P < 0.001) comp... 9. Antioxidant Capacity and Proanthocyanidin Composition of the Bark of Metasequoia glyptostroboides OpenAIRE Chen, Fengyang; Zhang, Lin; Zong, Shuling; Xu, Shifang; Li, Xiaoyu; Ye, Yiping 2014-01-01 Metasequoia glyptostroboides Hu et Cheng is the only living species in the genus Metasequoia Miki ex Hu et Cheng (Taxodiaceae), which is well known as a “living fossil” species. In the Chinese folk medicine, the leaves and bark of M. glyptostroboides are used as antimicrobic, analgesic, and anti-inflammatory drug for dermatic diseases. This study is the first to report the free radical scavenging capacity, antioxidant activity, and proanthocyanidin composition of the bark of M. glyptostroboid... 10. Oak bark allometry and fire survival strategies in the Chihuahuan desert Sky Islands, Texas, USA. Science.gov (United States) Schwilk, Dylan W; Gaetani, Maria S; Poulos, Helen M 2013-01-01 Trees may survive fire through persistence of above or below ground structures. 
Investment in bark aids in above-ground survival while investment in carbohydrate storage aids in recovery through resprouting and is especially important following above-ground tissue loss. We investigated bark allocation and carbohydrate investment in eight common oak (Quercus) species of Sky Island mountain ranges in west Texas. We hypothesized that relative investment in bark and carbohydrates changes with tree age and with fire regime: We predicted delayed investment in bark (positive allometry) and early investment in carbohydrates (negative allometry) under lower frequency, high severity fire regimes found in wetter microclimates. Common oaks of the Texas Trans-Pecos region (Quercus emoryi, Q. gambelii, Q. gravesii, Q. grisea, Q. hypoleucoides, Q. muehlenbergii, and Q. pungens) were sampled in three mountain ranges with historically mixed fire regimes: the Chisos Mountains, the Davis Mountains and the Guadalupe Mountains. Bark thickness was measured on individuals representing the full span of sizes found. Carbohydrate concentration in taproots was measured after initial leaf flush. Bark thickness was compared to bole diameter and allometries were analyzed using major axis regression on log-transformed measurements. We found that bark allocation strategies varied among species that can co-occur but have different habitat preferences. Investment patterns in bark were related to soil moisture preference and drought tolerance and, by proxy, to expected fire regime. Dry site species had shallower allometries with allometric coefficients ranging from less than one (negative allometry) to near one (isometric investment). Wet site species, on the other hand, had larger allometric coefficients, indicating delayed investment to defense. Contrary to our expectation, root carbohydrate concentrations were similar across all species and sizes, suggesting that any differences in below ground 12. Seasonal flight patterns of the Spruce bark beetle (Ips typographus) in Sweden OpenAIRE Öhrn, Petter 2012-01-01 The major bark beetle threat to Norway spruce (Picea abies (L.) Karst.) in Eurasia is the spruce bark beetle Ips typographus. Beetles cause damage after population build-up in defenseless trees. To minimize attacks, timely removal of these trees is important. This is practiced by clearing of wind throws and sanitation felling. Thus, knowledge about the region-specific flight pattern and voltinism of I. typographus is necessary for efficient pest management. This thesis focuses on the ... 13. Moessbauer spectroscopic study of iron in Japanese cedar bark (Paper No. HF-02) International Nuclear Information System (INIS) Singh, T.B.; Ichikuni, M. 1990-02-01 The bark samples of Japanese cedar collected from mountainous and urban areas were characterised by Moessbauer spectroscopy. The Moessbauer spectra showed that iron in the bark samples was distributed among paramagnetic Fe2+, Fe3+ and magnetic iron and their relative abundance changed appreciably from one area to another. Further, low Fe2+/Fe3+ ratio and high magnetic iron in urban samples indicated an influence of human activities. (author). 1 tab., 1 fig 14. Antivenom potential of ethanolic extract of Cordia macleodii bark against Naja venom OpenAIRE Pranay Soni; Surendra H. Bodakhe 2014-01-01 Objective: To evaluate the antivenom potential of ethanolic extract of bark of Cordia macleodii against Naja venom induced pharmacological effects such as lethality, hemorrhagic lesion, necrotizing lesion, edema, cardiotoxicity and neurotoxicity.
Methods: Wistar strain rats were challenged with Naja venom and treated with the ethanolic extract of Cordia macleodii bark. The effectiveness of the extract to neutralize the lethalities of Naja venom was investigated as recommended by WHO. Re... 15. Optical solar energy adaptations and radiative temperature control of green leaves and tree barks Energy Technology Data Exchange (ETDEWEB) Henrion, Wolfgang; Tributsch, Helmut [Department of Si-Photovoltaik and Solare Energetik, Hahn-Meitner-Institut Berlin, 14109 Berlin (Germany)] 2009-01-15 Trees have adapted to keep leaves and barks cool in sunshine and can serve as interesting bionic model systems for radiative cooling. Silicon solar cells, on the other hand, lose up to one third of their energy efficiency due to heating in intensive sunshine. It is shown that green leaves minimize absorption of useful radiation and allow efficient infrared thermal emission. Since elevated temperatures are detrimental for tensile water flow in the xylem tissue below barks, the optical properties of barks should also have evolved so as to avoid excessive heating. This was tested by performing optical studies with tree bark samples from representative trees. It was found that tree barks have optimized their reflection of incoming sunlight between 0.7 and 2 µm. This is approximately the optical window in which solar light is transmitted and reflected by green vegetation. Simultaneously, the tree bark is highly absorbing and thus radiation emitting between 6 and 10 µm. These two properties, mainly provided by tannins, create optimal conditions for radiative temperature control. In addition, tannins seem to have adopted a function as mediators for excitation energy towards photo-antioxidative activity for control of radiation damage. The results obtained are used to discuss challenges for future solar cell optimization. (author) 16. Words can slow down category learning.
Science.gov (United States) Brojde, Chandra L; Porter, Chelsea; Colunga, Eliana 2011-08-01 Words have been shown to influence many cognitive tasks, including category learning. Most demonstrations of these effects have focused on instances in which words facilitate performance. One possibility is that words augment representations, predicting an across-the-board benefit of words during category learning. We propose that words shift attention to dimensions that have been historically predictive in similar contexts. Under this account, there should be cases in which words are detrimental to performance. The results from two experiments show that words impair learning of object categories under some conditions. Experiment 1 shows that words hurt performance when learning to categorize by texture. Experiment 2 shows that words also hurt when learning to categorize by brightness, leading to selectively attending to shape when both shape and hue could be used to correctly categorize stimuli. We suggest that both the positive and negative effects of words have developmental origins in the history of word usage while learning categories. 17. Category-specificity in visual object recognition DEFF Research Database (Denmark) Gerlach, Christian 2009-01-01 Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been demonstrated in neurologically intact subjects, but the findings are contradictory and there is no agreement as to why category-effects arise. This article presents a Pre-semantic Account of Category Effects (PACE) in visual object recognition. PACE assumes two processing stages: shape configuration (the
binding of shape elements into elaborate shape descriptions) and selection (among competing representations in visual long-term memory), which are held to be differentially affected by the structural similarity between objects. Drawing on evidence from clinical studies, experimental studies... 18. Visual object recognition and category-specificity DEFF Research Database (Denmark) Gerlach, Christian This thesis is based on seven published papers. The majority of the papers address two topics in visual object recognition: (i) category-effects at pre-semantic stages, and (ii) the integration of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual... in visual long-term memory. In the thesis it is described how this simple model can account for a wide range of findings on category-specificity in both patients with brain damage and normal subjects. Finally, two hypotheses regarding the neural substrates of the model's components - and how activation... 19. Uniform Reserve Training and Retirement Category Administration National Research Council Canada - National Science Library Kohner, D 1997-01-01 This Instruction implements policy as provided in DoD Directive 1215.6, assigns responsibilities and prescribes procedures that pertain to the designation and use of uniform Reserve component (RC) categories (RCCs... 20.
Topoi: the categorial analysis of logic CERN Document Server Goldblatt, Robert 2013-01-01 A classic exposition of a branch of mathematical logic that uses category theory, this text is suitable for advanced undergraduates and graduate students and accessible to both philosophically and mathematically oriented readers. 1. Comparing two K-category assignments by a K-category correlation coefficient DEFF Research Database (Denmark) Gorodkin, Jan 2004-01-01 Predicted assignments of biological sequences are often evaluated by the Matthews correlation coefficient. However, the Matthews correlation coefficient applies only to cases where the assignments belong to two categories, and cases with more than two categories are often artificially forced into two...... categories by considering what belongs and what does not belong to one of the categories, leading to the loss of information. Here, an extended correlation coefficient that applies to K-categories is proposed, and this measure is shown to be highly applicable for evaluating prediction of RNA secondary... 2. Investigation of antioxidant activity of some fruit stem barks in the Eastern Black Sea Region Directory of Open Access Journals (Sweden) Aytaç Güder 2016-10-01 Full Text Available Antioxidant compounds in food play an important role as a health protecting factor. Scientific evidence suggests that antioxidants reduce the risk for chronic diseases including cancer and heart disease. Primary sources of naturally occurring antioxidants are whole grains, fruits and vegetables. Antioxidant activity can be investigated by using different methods such as total antioxidant activity, hydrogen peroxide and DPPH free radical scavenging activities, metal-chelating activity, total phenolic and flavonoid contents and others. In this study, antioxidant activity of the ethanol-water extracts of three stem barks, kiwi (Actinidia chinensis Planch.) (AC), lemon (Citrus limon (L.) Burm. f.) (CL) and cherry laurel (Laurocerasus officinalis Roem.)
(LO) has been assessed. According to the FTC method, the total antioxidant activities (%) of AC, CL and LO have been determined as 73.35, 67.59 and 61.62, respectively. The DPPH radical scavenging activities of AC, CL, LO, BHA, RUT and TRO in terms of SC50 values (µg/mL) were found to be 50.52, 56.56, 98.18, 8.58, 17.01, 26.84, respectively. Total phenolic and total flavonoid contents in AC, CL and LO ranged from 850.71 to 457.79 µg gallic acid equivalent/g and 58.77 to 22.91 µg of catechin equivalents/g, respectively. In conclusion, the extracts of AC showed higher antioxidant activity than the other samples and thus warrant further exploration for effective use in the pharmaceutical and medical sectors. 3. A new pentacyclic phenol and other constituents from the root bark of Bauhinia racemosa Lamk. Science.gov (United States) Jain, Renuka; Yadav, Namita; Bhagchandani, Teena; Jain, Satish C 2013-10-01 This work reported the isolation of one unknown (1) and 10 known compounds (2-11) from the root bark of Bauhinia racemosa Lamk. (family: Caesalpiniaceae). Racemosolone (1) was characterised as a pentacyclic phenolic compound possessing an unusual skeleton with a cycloheptane ring and a rare furopyran moiety. The structure elucidation was carried out on the basis of UV, infrared (IR), HR-ESI-MS, 1D and 2D NMR spectra and finally confirmed by the single crystal X-ray analysis. The known compounds were characterised as n-tetracosane, β-sitosteryl stearate, eicosanoic acid, stigmasterol, β-sitosterol, racemosol, octacosyl ferulate, de-O-methyl racemosol, lupeol and 1,7,8,12b-tetrahydro-2,2,4-trimethyl-2H-benzo[6,7]cyclohepta [1,2,3-de] [1] benzopyran-5,10,11 triol on the basis of spectroscopic data comparison with the literature values. Compounds with skeleton similar to 1 have never been reported from any natural or other source. 4.
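Gorodkin's K-category correlation coefficient (entry 1 above) can be computed directly from the K × K confusion matrix. Below is a minimal sketch; the function name and implementation details are my own, following the published definition of R_K rather than any reference code from the paper:

```python
import math

def k_category_corr(y_true, y_pred):
    """Gorodkin's K-category correlation coefficient (R_K), a
    generalization of the Matthews correlation coefficient to K
    classes, computed from the K x K confusion matrix."""
    labels = sorted(set(y_true) | set(y_pred))
    idx = {lab: i for i, lab in enumerate(labels)}
    k = len(labels)
    C = [[0] * k for _ in range(k)]          # confusion matrix: rows true, cols predicted
    for t, p in zip(y_true, y_pred):
        C[idx[t]][idx[p]] += 1
    s = len(y_true)                          # total number of samples
    c = sum(C[i][i] for i in range(k))       # correctly predicted samples
    t = [sum(row) for row in C]              # how often each class truly occurred
    p = [sum(C[i][j] for i in range(k)) for j in range(k)]  # how often predicted
    cov_xy = c * s - sum(ti * pi for ti, pi in zip(t, p))
    cov_xx = s * s - sum(pi * pi for pi in p)
    cov_yy = s * s - sum(ti * ti for ti in t)
    den = math.sqrt(cov_xx * cov_yy)
    return cov_xy / den if den else 0.0      # 0 by convention when undefined
```

For two categories this reduces to the familiar Matthews correlation coefficient; scikit-learn's `matthews_corrcoef` implements the same multiclass statistic.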
Use of bark-derived pyrolysis oils as a phenol substitute in structural panel adhesives Energy Technology Data Exchange (ETDEWEB) Louisiana Pacific Corp 2004-03-01 The main objective of this program was to pilot the world's first commercial-scale production of an acceptable phenol formaldehyde (PF) resin containing natural resin (NR) ingredients, for use as an adhesive in Oriented-Strand Board (OSB) and plywood panel products. Natural Resin products, specifically MNRP, are not lignin "fillers". They are chemically active, natural phenolics that effectively displace significant amounts of phenol in PF resins, and which are extracted from bark-derived and wood-derived bio-oils. Other objectives included the enhancement of the economics of NR (MNRP) production by optimizing the production of certain Rapid Thermal Processing (RTP™) byproducts, particularly char and activated carbon. The options were to activate the char for use in waste-water and/or stack gas purification. The preliminary results indicate that RTP™ carbon may ultimately serve as a feedstock for activated carbon synthesis, as a fuel to be used within the wood product mill, or a fuel for an electrical power generating facility. Incorporation of the char as an industrial heat source for use in mill operations was L-P's initial intention for the carbon, and was also of interest to Weyerhaeuser as they stepped into the project. 5. Mixed quantum states in higher categories Directory of Open Access Journals (Sweden) Chris Heunen 2014-12-01 Full Text Available There are two ways to describe the interaction between classical and quantum information categorically: one based on completely positive maps between Frobenius algebras, the other using symmetric monoidal 2-categories. This paper makes a first step towards combining the two.
The integrated approach allows a unified description of quantum teleportation and classical encryption in a single 2-category, as well as a universal security proof applicable simultaneously to both scenarios. 6. Derivation of plutonium-239 materials disposition categories International Nuclear Information System (INIS) Brough, W.G. 1995-01-01 At this time, the Office of Fissile Materials Disposition within the DOE, is assessing alternatives for the disposition of excess fissile materials. To facilitate the assessment, the Plutonium-Bearing Materials Feed Report for the DOE Fissile Materials Disposition Program Alternatives report was written. The development of the material categories and the derivation of the inventory quantities associated with those categories is documented in this report 7. Monoidal categories and topological field theory CERN Document Server Turaev, Vladimir 2017-01-01 This monograph is devoted to monoidal categories and their connections with 3-dimensional topological field theories. Starting with basic definitions, it proceeds to the forefront of current research. Part 1 introduces monoidal categories and several of their classes, including rigid, pivotal, spherical, fusion, braided, and modular categories. It then presents deep theorems of Müger on the center of a pivotal fusion category. These theorems are proved in Part 2 using the theory of Hopf monads. In Part 3 the authors define the notion of a topological quantum field theory (TQFT) and construct a Turaev-Viro-type 3-dimensional state sum TQFT from a spherical fusion category. Lastly, in Part 4 this construction is extended to 3-manifolds with colored ribbon graphs, yielding a so-called graph TQFT (and, consequently, a 3-2-1 extended TQFT). The authors then prove the main result of the monograph: the state sum graph TQFT derived from any spherical fusion category is isomorphic to the Reshetikhin-Turaev surgery gr... 8. Analysis of commercial proanthocyanidins. 
Part 4: solid state (13)C NMR as a tool for in situ analysis of proanthocyanidin tannins, in heartwood and bark of quebracho and acacia, and related species. Science.gov (United States) Reid, David G; Bonnet, Susan L; Kemp, Gabre; van der Westhuizen, Jan H 2013-10-01 (13)C NMR is an effective method of characterizing proanthocyanidin (PAC) tannins in quebracho (Schinopsis lorentzii) heartwood and black wattle (Acacia mearnsii) bark, before and after commercial extraction. The B-rings of the constituent flavan-3-ols, catechols (quebracho) or pyrogallols (wattle), are recognized in unprocessed source materials by "marker" signals at ca. 118 or 105ppm, respectively. NMR allows the minimum extraction efficiency to be calculated; ca. 30%, and ca. 80%, for quebracho heartwood and black wattle bark, respectively. NMR can also identify PAC tannin (predominantly robinetinidin), and compare tannin content, in bark from other acacia species; tannin content decreases in the order A. mearnsii, Acacia pycnantha (87% of A. mearnsii), Acacia dealbata and Acacia decurrens (each 74%) and Acacia karroo (30%). Heartwood from an underexploited PAC tannin source, Searsia lancea, taxonomically close to quebracho, shows abundant profisetinidin and catechin PACs. NMR offers the advantage of being applicable to source materials in their native state, and has potential applications in optimizing extraction processes, identification of tannin sources, and characterization of tannin content in cultivar yield improvement programmes. Copyright © 2013 Elsevier Ltd. All rights reserved. 9. 40 CFR 427.34 - Pretreatment standards for existing sources. Science.gov (United States) 2010-07-01 ...) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.34 Pretreatment standards for existing sources. Any existing source subject to... 10. 40 CFR 427.44 - Pretreatment standards for existing sources. 
Science.gov (United States) 2010-07-01 ...) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Elastomeric Binder) Subcategory § 427.44 Pretreatment standards for existing sources. Any existing source subject to... 11. 40 CFR 417.84 - Pretreatment standards for existing sources. Science.gov (United States) 2010-07-01 ...) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Liquid Soaps Subcategory § 417.84 Pretreatment standards for existing sources. Any existing source... 12. Energy information data base: energy categories International Nuclear Information System (INIS) 1980-03-01 Citations entered into DOE's computerized bibliographic information system are assigned six-digit subject category numbers to group information broadly for storage, retrieval, and manipulation. These numbers are used in the preparation of printed documents, such as bibliographies and abstract journals, to arrange the citations and as searching aids in the on-line system, DOE/RECON. This document has been prepared for use by those individuals responsible for the assignment of category numbers to documents being entered into the Technical Information Center (TIC) system, those individuals and organizations processing magnetic tape copies of the files, those individuals doing on-line searching for information in TIC-created files, and others who, having no access to RECON, need printed copy. The six-digit numbers assigned to documents are listed, along with the category names and text to define the scope of interest. Asterisks highlight those categories added or changed since the previous printing, and a subject index further details the subject content of each category 13. When does fading enhance perceptual category learning? 
Science.gov (United States) Pashler, Harold; Mozer, Michael C 2013-07-01 Training that uses exaggerated versions of a stimulus discrimination (fading) has sometimes been found to enhance category learning, mostly in studies involving animals and impaired populations. However, little is known about whether and when fading facilitates learning for typical individuals. This issue was explored in 7 experiments. In Experiments 1 and 2, observers discriminated stimuli based on a single sensory continuum (time duration and line length, respectively). Adaptive fading dramatically improved performance in training (unsurprisingly) but did not enhance learning as assessed in a final test. The same was true for nonadaptive linear fading (Experiment 3). However, when variation in length (predicting category membership) was embedded among other (category-irrelevant) variation, fading dramatically enhanced not only performance in training but also learning as assessed in a final test (Experiments 4 and 5). Fading also helped learners to acquire a color saturation discrimination amid category-irrelevant variation in hue and brightness, although this learning proved transitory after feedback was withdrawn (Experiment 7). Theoretical implications are discussed, and we argue that fading should have practical utility in naturalistic category learning tasks, which involve extremely high dimensional stimuli and many irrelevant dimensions. PsycINFO Database Record (c) 2013 APA, all rights reserved. 14. TV MEDIA ANALYSIS FOR BANKING CATEGORY (2012 Directory of Open Access Journals (Sweden) Alexandra Elena POȘTOACĂ 2014-04-01 Full Text Available This article represents a short overview of the media landscape for the banking category in Romania in 2012. 
Unlike the other categories (for example FMCG – fast moving consumer goods), the banking category is more complex because every bank can communicate for a wider range of products (credits, deposits, packages dedicated to students, pensioners and other types of banking products). In the first part of this paper, some theoretical notions about media planning and media analysis are presented so that the reader can easily follow the second part of the article. The second part of the paper refers only to TV analyses. This media channel owns the highest budget share in our category, and also in the media mix of every important player active in the Romanian market. The analyses will show which bank communicated most effectively, which is the most important spender on TV, which banking products had the largest budgets allocated, what the pattern is for this category when it comes to allocating audience points for each day interval, and so on. The starting point of these analyses is the secondary data obtained from InfoSys+, which is the world’s leading TV analysis software, used in more than 29 countries by 8000+ users. 15. Induced terpene accumulation in Norway spruce inhibits bark beetle colonization in a dose-dependent manner. Directory of Open Access Journals (Sweden) Tao Zhao Full Text Available Tree-killing bark beetles (Coleoptera, Scolytinae) are among the most economically and ecologically important forest pests in the northern hemisphere. Induction of terpenoid-based oleoresin has long been considered important in conifer defense against bark beetles, but it has been difficult to demonstrate a direct correlation between terpene levels and resistance to bark beetle colonization. To test for inhibitory effects of induced terpenes on colonization by the spruce bark beetle Ips typographus (L.), we inoculated 20 mature Norway spruce Picea abies (L.) Karsten trees with a virulent fungus associated with the beetle, Ceratocystis polonica (Siem.) C.
Moreau, and investigated induced terpene levels and beetle colonization in the bark. Fungal inoculation induced very strong and highly variable terpene accumulation 35 days after inoculation. Trees with high induced terpene levels (n = 7) had only 4.9% as many beetle attacks (5.1 vs. 103.5 attacks m⁻²) and 2.6% as much gallery length (0.029 m m⁻² vs. 1.11 m m⁻²) as trees with low terpene levels (n = 6). There was a highly significant rank correlation between terpene levels at day 35 and beetle colonization in individual trees. The relationship between induced terpene levels and beetle colonization was not linear but thresholded: above a low threshold concentration of ∼100 mg terpene g⁻¹ dry phloem trees suffered only moderate beetle colonization, and above a high threshold of ∼200 mg terpene g⁻¹ dry phloem trees were virtually unattacked. This is the first study demonstrating a dose-dependent relationship between induced terpenes and tree resistance to bark beetle colonization under field conditions, indicating that terpene induction may be instrumental in tree resistance. This knowledge could be useful for developing management strategies that decrease the impact of tree-killing bark beetles. 16. Extraction and Hydrophobic Modification of Cotton Stalk Bark Fiber Directory of Open Access Journals (Sweden) Ya-Yu Li 2016-01-01 Full Text Available Cotton stalk bark fiber (CSBF) was extracted at high temperature and under high pressure, under the condition of an alkali content of 11 wt%. Experimental results proved that the extraction yield of CSBF was 27.3 wt%, and the residual alkali concentration was 2.1 wt%. Then five kinds of modifiers, including methyl methacrylate (MMA), MMA plus initiator, epoxy propane, copper ethanolamine, and silane coupling agent, were chosen to modify the surface of CSBF. It was found by measuring water retention value (WRV) that these five kinds of modifiers were all effective and the silane coupling agent was the best modifier of all.
The optimal modifying conditions of silane coupling agent were obtained: modifier concentration was 5%, the mixing temperature was 20°C, the mixing time was 1 h, and vacuum drying time was 1 h. Under the optimal condition, the WRV of the modified CSBF was 89%. It is expected that these modified CSBF may be a filler with strengthening effect in wood plastic composites (WPC fields. 17. Psychopharmacological properties of saponins from Randia nilotica stem bark. Science.gov (United States) Danjuma, N M; Chindo, B A; Abdu-Aguye, I; Anuka, J A; Hussaini, I M 2014-01-01 Decoctions of Randia nilotica Stapf. (Rubiaceae) have been used in the Nigerian traditional medicine for the management of epilepsy, anxiety, depression and psychosis for many years and their efficacies are widely acclaimed among the rural communities of Northern Nigeria. The aim of this study is to establish whether the saponins present in R. nilotica are responsible for its acclaimed beneficial effects in Nigerian traditional medicine. The behavioural properties of the saponin-rich fraction (SFRN) of R. nilotica stem bark were studied on hole-board, diazepam-induced sleep, rota-rod and beam-walking in mice. The anticonvulsant properties of SFRN were also examined on maximal electroshock, pentylenetetrazole- and strychnine-induced seizures in mice. The intraperitoneal LD₅₀ of SFRN in mice and rats were estimated to be 11.1 and 70.7 mg/kg, respectively. SFRN significantly prolonged the duration of diazepam-induced sleep; diminished head dip counts in the hole-board test and protected mice against maximal electroshock seizures. SFRN failed to protect mice against pentylenetetrazole- and strychnine-induced seizures; and had no effect on motor coordination on the rota-rod treadmill at the doses tested. SFRN significantly decreased the number of foot slips in the beam-walking assay in mice with no effect on time to reach the goal box. 
This study provides evidence of the psychopharmacological effects of SFRN, thus supporting further development of the psychoactive components as remedies for epilepsy. 18. Antigenotoxic prenylated flavonoids from stem bark of Erythrina latissima. Science.gov (United States) Zarev, Yancho; Foubert, Kenn; Lucia de Almeida, Vera; Anthonissen, Roel; Elgorashi, Esameldin; Apers, Sandra; Ionkova, Iliana; Verschaeve, Luc; Pieters, Luc 2017-09-01 A series of prenylated flavonoids was obtained from antigenotoxic extracts and fractions of stem bark of Erythrina latissima E. Mey (Leguminosae). In addition to five constituents never reported before, i.e. (2S)-5,7-dihydroxy-2-(4-hydroxy-2-(prop-1-en-2-yl)-2,3-dihydrobenzofuran-6-yl)chroman-4-one (erylatissin D), (2S)-5,7-dihydroxy-2-(4-methoxy-2-(prop-1-en-2-yl)-2,3-dihydrobenzofuran-6-yl)chroman-4-one (erylatissin E), 5,7-dihydroxy-3-(4-methoxy-2-(prop-1-en-2-yl)-2,3-dihydrobenzofuran-6-yl)-4H-chromen-4-one (erylatissin F), (2S)-5,7,8'-trihydroxy-2',2'-dimethyl-[2,6'-bichroman]-4-one (erylatissin G) and (2S)-5,7-dihydroxy-8'-methoxy-2',2'-dimethyl-[2,6'-bichroman]-4-one (dihydroabyssinin I), 18 known flavonoids were identified. Evaluation of the antigenotoxic properties (against genotoxicity induced by aflatoxin B1, metabolically activated) in the Vitotox assay revealed that most flavonoids were active. Sigmoidin A and B showed the highest activity, with an IC50 value of 18.7 μg/mL, equivalent to that of curcumin (IC50 18.4 μg/mL), used as a reference antigenotoxic compound. Copyright © 2017 Elsevier Ltd. All rights reserved. 19. Characterization of tannin-based adhesives from Acacia mangium barks International Nuclear Information System (INIS) Siti Fatahiyah Mohamada; Pizzi, Antonio 2010-01-01 The aim of this work is to demonstrate the performance of Acacia mangium tannin-based adhesives in particleboard production. The tannin was extracted from Acacia mangium bark using different extraction media.
Three different extraction media were used: (1) water (control), (2) Na2SO3 (4%) / Na2CO3 (0.4%), and (3) Na2SO3 (8%) / Na2CO3 (0.8%). Medium (3) produced the highest yield (25.8%), followed by medium (2) (21.6%) and the lowest yield (17.7%). To evaluate the mechanical performance of the optimal Acacia mangium tannin-based adhesives, particleboards were produced using three different hardeners, and their mechanical properties (internal bonding) were investigated. The performance of these panels is comparable to that of commercial particleboard panels. The results showed that panels bonded with paraformaldehyde (0.392 MPa) exhibited the best mechanical properties, followed by panels hardened with hexamine (0.367 MPa) and panels bonded with glyoxal-tannin-based adhesives (0.244 MPa). This shows that the suitability of hardeners for Acacia mangium tannin ranks formaldehyde > hexamine > glyoxal. (author) 20. Sex Work Criminalization Is Barking Up the Wrong Tree. Science.gov (United States) Vanwesenbeeck, Ine 2017-08-01 There is a notable shift toward more repression and criminalization in sex work policies, in Europe and elsewhere. So-called neo-abolitionism reduces sex work to trafficking, with increased policing and persecution as a result. Punitive "demand reduction" strategies are progressively more popular. These developments call for a review of what we know about the effects of punishing and repressive regimes vis-à-vis sex work. From the evidence presented, sex work repression and criminalization are branded as "waterbed politics" that push and shove sex workers around with an overload of controls and regulations that in the end only make things worse. It is illustrated how criminalization and repression make it less likely that commercial sex is worker-controlled, non-abusive, and non-exploitative. Criminalization is seriously at odds with human rights and public health principles.
It is concluded that sex work criminalization is barking up the wrong tree because it is fighting sex instead of crime and it is not offering any solution for the structural conditions that sex work (its ugly sides included) is rooted in. Sex work repression travels a dead-end street and holds no promises whatsoever for a better future. To fight poverty and gendered inequalities, the criminal justice system simply is not the right instrument. The reasons for the persistent stigma on sex work as well as for its present revival are considered. 1. Cytotoxic Flavones from the Stem Bark of Bougainvillea spectabilis Willd. Science.gov (United States) Do, Lien T M; Aree, Thammarat; Siripong, Pongpun; Vo, Nga T; Nguyen, Tuyet T A; Nguyen, Phung K P; Tip-Pyang, Santi 2018-01-01 Five new flavones possessing a fully substituted A-ring with C-6 and C-8 methyl groups, bougainvinones I-M (1-5), along with three known congeners, 2'-hydroxydemethoxymatteucinol (6), 5,7,3',4'-tetrahydroxy-3-methoxy-6,8-dimethylflavone (7) and 5,7,4'-trihydroxy-3-methoxy-6,8-dimethylflavone (8), were isolated from the EtOAc extract of the stem bark of Bougainvillea spectabilis. Their structures were established by means of spectroscopic data (ultraviolet, infrared, high-resolution electrospray ionization mass spectrometry, and one-dimensional and two-dimensional nuclear magnetic resonance) and single-crystal X-ray crystallographic analysis. The in vitro cytotoxicity of all isolated compounds against five cancer cell lines (KB, HeLa S-3, MCF-7, HT-29, and HepG2) was evaluated. Compound 5 showed promising cytotoxic activity against the KB and HeLa S-3 cell lines, with IC50 values of 7.44 and 6.68 µM. The other compounds exhibited moderate cytotoxicity against the KB cell line. Georg Thieme Verlag KG Stuttgart · New York. 2.
Chemical Composition of Sea Buckthorn Leaves, Branches and Bark Directory of Open Access Journals (Sweden) Gradt Ina 2017-06-01 Full Text Available Sea buckthorn leaves and branches presently create waste-/by-products of harvesting after pruning the plants. It is already known that sea buckthorn berries are important for their chemical composition and based on this occupy a wide field in nutrition. We raised the idea that sea buckthorn leaves, branches, and especially the bark, have also an extraordinary chemical composition like the berries. The aim of this study was to describe these by-products. For this purpose, detailed full analyses of corresponding samples from Russia (seven varieties and Germany (four varieties were performed. Especially the dry mass, fat content, proteins, carbohydrates, starch content, and crude fiber were investigated to obtain an overview. Minor components like total phenol content, metals, and water- and fat-soluble vitamins were also studied. All analytical parameters were based on an official collection of analysis methods (German ASU - amtliche Sammlung von Untersuchungsverfahren. The results of the full analysis of leaves and branches show some interesting aspects about the differences between male and female plants. Furthermore, we observed differences between Russian and German sea buckthorn varieties. Investigation of minor components showed that vitamins were present in very low amount (< 0.1 %. 3. Cellulose nanocrystals from acacia bark-Influence of solvent extraction. Science.gov (United States) Taflick, Ticiane; Schwendler, Luana A; Rosa, Simone M L; Bica, Clara I D; Nachtigall, Sônia M B 2017-08-01 The isolation of cellulose nanocrystals from different lignocellulosic materials has shown increased interest in academic and technological research. These materials have excellent mechanical properties and can be used as nanofillers for polymer composites as well as transparent films for various applications. 
In this work, cellulose isolation was performed following an environmentally friendly, chlorine-free procedure. Cellulose nanocrystals were isolated from the exhausted acacia bark (after the industrial process of extracting tannin) with the objective of evaluating the effect of the solvent extraction steps on the characteristics of cellulose and cellulose nanocrystals. The effect of acid hydrolysis time on the thermal stability, morphology and size of the nanocrystals was also assessed, through TGA, TEM and light scattering analyses. It was concluded that the extraction step with solvents was important in the isolation of cellulose, but irrelevant in the isolation of cellulose nanocrystals. Light scattering experiments indicated that 30 min of hydrolysis was long enough for the isolation of cellulose nanocrystals. Copyright © 2017 Elsevier B.V. All rights reserved. 4. Myxomycetes from the bark of the evergreen oak Quercus ilex Directory of Open Access Journals (Sweden) Wrigley de Basanta, Diana 1998-06-01 Full Text Available The results of 81 moist chamber cultures of bark from living Quercus ilex trees are reported. A total of 37 taxa are cited, extending the number of species found on this substrate to 55. The presence of Licea deplanata on the Iberian Peninsula is confirmed. Seven new records are included for the province of Madrid. Some data are contributed on species frequency and incubation times. 5. Grammatical category dissociation in multilingual aphasia.
Science.gov (United States) Faroqi-Shah, Yasmeen; Waked, Arifi N 2010-03-01 Word retrieval deficits for specific grammatical categories, such as verbs versus nouns, occur as a consequence of brain damage. Such deficits are informative about the nature of lexical organization in the human brain. This study examined retrieval of grammatical categories across three languages in a trilingual person with aphasia who spoke Arabic, French, and English. In order to delineate the nature of word production difficulty, comprehension was tested, and a variety of concomitant lexical-semantic variables were analysed. The patient demonstrated a consistent noun-verb dissociation in picture naming and narrative speech, with severely impaired production of verbs across all three languages. The cross-linguistically similar noun-verb dissociation, coupled with little evidence of semantic impairment, suggests that (a) the patient has a true "nonsemantic" grammatical category specific deficit, and (b) lexical organization in multilingual speakers shares grammatical class information between languages. The findings of this study contribute to our understanding of the architecture of lexical organization in bilinguals. 6. From Perceptual Categories to Concepts: What Develops? Science.gov (United States) Sloutsky, Vladimir M. 2010-01-01 People are remarkably smart: they use language, possess complex motor skills, make non-trivial inferences, develop and use scientific theories, make laws, and adapt to complex dynamic environments. Much of this knowledge requires concepts and this paper focuses on how people acquire concepts. It is argued that conceptual development progresses from simple perceptual grouping to highly abstract scientific concepts. This proposal of conceptual development has four parts. First, it is argued that categories in the world have different structure. 
Second, there might be different learning systems (sub-served by different brain mechanisms) that evolved to learn categories of differing structures. Third, these systems exhibit differential maturational course, which affects how categories of different structures are learned in the course of development. And finally, an interaction of these components may result in the developmental transition from perceptual groupings to more abstract concepts. This paper reviews a large body of empirical evidence supporting this proposal. PMID:21116483 7. Lectures on tensor categories and modular functors CERN Document Server Bakalov, Bojko 2000-01-01 This book gives an exposition of the relations among the following three topics: monoidal tensor categories (such as a category of representations of a quantum group), 3-dimensional topological quantum field theory, and 2-dimensional modular functors (which naturally arise in 2-dimensional conformal field theory). The following examples are discussed in detail: the category of representations of a quantum group at a root of unity and the Wess-Zumino-Witten modular functor. The idea that these topics are related first appeared in the physics literature in the study of quantum field theory. Pioneering works of Witten and Moore-Seiberg triggered an avalanche of papers, both physical and mathematical, exploring various aspects of these relations. Upon preparing to lecture on the topic at MIT, however, the authors discovered that the existing literature was difficult and that there were gaps to fill. The text is wholly expository and finely succinct. It gathers results, fills existing gaps, and simplifies some pro... 8. Multimedia category preferences of working engineers Science.gov (United States) Baukal, Charles E.; Ausburn, Lynna J. 
2016-09-01 Many have argued for the importance of continuing engineering education (CEE), but relatively few recommendations were found in the literature for how to use multimedia technologies to deliver it most effectively. The study reported here addressed this gap by investigating the multimedia category preferences of working engineers. Four categories of multimedia, with two types in each category, were studied: verbal (text and narration), static graphics (drawing and photograph), dynamic non-interactive graphics (animation and video), and dynamic interactive graphics (simulated virtual reality (VR) and photo-real VR). The results showed that working engineers strongly preferred text over narration and somewhat preferred drawing over photograph, animation over video, and simulated VR over photo-real VR. These results suggest that a variety of multimedia types should be used in the instructional design of CEE content. 9. Shape configuration and category-specificity DEFF Research Database (Denmark) Gerlach, Christian; Law, I; Paulson, Olaf B. 2006-01-01 and fragmented drawings. We also examined whether fragmentation had different impact on the recognition of natural objects and artefacts and found that recognition of artefacts was more affected by fragmentation than recognition of natural objects. Thus, the usual finding of an advantage for artefacts...... in difficult object decision tasks, which is also found in the present experiments with outlines, is reversed when the stimuli are fragmented. This interaction between category (natural versus artefacts) and stimulus type (outlines versus fragmented forms) is in accordance with predictions derived from...... a recent account of category-specificity and lends support to the notion that category-specific impairments can occur for both natural objects and artefacts following damage to pre-semantic stages in visual object recognition. 
The implications of the present findings are discussed in relation to theories... 10. Polyphenolic Composition and Antioxidant Activity of Aqueous and Ethanolic Extracts from Uncaria tomentosa Bark and Leaves Directory of Open Access Journals (Sweden) Mirtha Navarro-Hoyos 2018-05-01 Full Text Available Uncaria tomentosa constitutes an important source of secondary metabolites with diverse biological activities mainly attributed until recently to alkaloids and triterpenes. We have previously reported for the first time the polyphenolic profile of extracts from U. tomentosa, using a multi-step process involving organic solvents, as well as their antioxidant capacity, antimicrobial activity on aerial bacteria, and cytotoxicity on cancer cell lines. These promising results prompted the present study using food grade solvents suitable for the elaboration of commercial extracts. We report a detailed study on the polyphenolic composition of aqueous and ethanolic extracts of U. tomentosa bark and leaves (n = 16), using High Performance Liquid Chromatography coupled with Mass Spectrometry (HPLC-DAD/TQ-ESI-MS). A total of 32 compounds were identified, including hydroxybenzoic and hydroxycinnamic acids, flavan-3-ol monomers, procyanidin dimers and trimers, flavalignans–cinchonains and propelargonidin dimers. Our findings showed that the leaves were the richest source of total phenolics and proanthocyanidins, in particular propelargonidin dimers. Two-way Analysis of Variance (ANOVA) indicated that the contents of procyanidin and propelargonidin dimers were significantly different (p < 0.05) as a function of the plant part, with leaf extracts showing higher contents. Oxygen Radical Absorbance Capacity (ORAC) and 2,2-diphenyl-1-picrylhydrazyl (DPPH) values indicated higher antioxidant capacity for the leaves (p < 0.05). Further, correlation between both methods and procyanidin dimers was found, particularly between ORAC and propelargonidin dimers.
Finally, Principal Component Analysis (PCA) results clearly indicated that the leaves are the richest plant part in proanthocyanidins and a very homogeneous material, regardless of their origin. Therefore, our findings revealed that both ethanol and water extraction processes are adequate for the elaboration of 11. Simultaneous determination of five characteristic stilbene glycosides in root bark of Morus albus L. (Cortex Mori) using high-performance liquid chromatography. Science.gov (United States) Piao, Shu-juan; Chen, Li-xia; Kang, Ning; Qiu, Feng 2011-01-01 Cortex Mori, one of the well-known traditional Chinese herbal medicines, is derived from the root bark of Morus alba L. according to the China Pharmacopeia. Stilbene glycosides are the main components isolated from aqueous extracts of Morus alba and their content varies depending on where Cortex Mori was collected. We have established a qualitative and quantitative method based on the bioactive stilbene glycosides for control of the quality of Cortex Mori from different sources. To develop a high-performance liquid chromatography method coupled with ultraviolet absorption detection for simultaneous quantitative determination of five major characteristic stilbene glycosides in 34 samples of the root bark of Morus alba L. (Cortex Mori) from different sources. The analysis was performed on an ODS column using methanol-water-acetic acid (18: 82: 0.1, v/v/v) as the mobile phase and the peaks were monitored at 320 nm. All calibration curves showed good linearity (r ≥ 0.9991) within test ranges. This method showed good repeatability for the quantification of these five components in Cortex Mori with intra- and inter-day standard deviations less than 2.19% and 1.45%, respectively. The validated method was successfully applied to quantify the five investigated components, including a pair of cis-trans-isomers 1 and 2 and a pair of isomers 4 and 5 in 34 samples of Cortex Mori from different sources.
Copyright © 2010 John Wiley & Sons, Ltd. 12. Functional categories in agrammatism: evidence from Greek. Science.gov (United States) Stavrakaki, Stavroula; Kouvava, Sofia 2003-07-01 The aim of this study is twofold. First, to investigate the use of functional categories by two Greek agrammatic aphasics. Second, to discuss the implications of our findings for the characterization of the deficit in agrammatism. The functional categories under investigation were the following: definite and indefinite articles, personal pronouns, aspect, tense, subject-verb agreement, wh-pronouns, complementizers and the mood marker na (=to). Based on data collected through different methods, it is argued that the deficit in agrammatism cannot be described in terms of a structural account but rather by means of difficulties in the implementation of grammatical knowledge. 13. How to Do Things with Categories DEFF Research Database (Denmark) Krabbe, Anders Dahl Consumers and other audiences draw upon cognitive categories when evaluating technological products (Clark, 1985; Kaplan and Tripsas, 2008). Categories such as “mini-van” or “computer” provide labels and conceptual meaning structures that consumers and other market actors draw upon in making sense...... the majority of archival data was collected. Finally, to trace consumer reception of innovations in the design of products and technological innovations, I constructed a data set based on posts from an online hearing aid consumer forum. The initial analysis each spawned into three distinct trajectories... 14. Time trends in the levels and patterns of polycyclic aromatic hydrocarbons (PAHs) in pine bark, litter, and soil after a forest fire. Science.gov (United States) Choi, Sung-Deuk 2014-02-01 Forest fires are known as an important natural source of polycyclic aromatic hydrocarbons (PAHs), but time trends of PAH levels and patterns in various environmental compartments after forest fires have not been thoroughly studied yet. 
In this study, 16 US-EPA priority PAHs were analyzed for pine bark, litter, and soil samples collected one, three, five, and seven months after a forest fire in Pohang, South Korea. At the first sampling event, the highest levels of ∑16 PAHs were measured for the three types of samples (pine bark: 5,920 ng/g, litter: 1,540 ng/g, and soil: 133 ng/g). Thereafter, there were apparent decreasing trends in PAH levels; the control samples showed the lowest levels (pine bark: 124 ng/g, litter: 75 ng/g, and soil: 26 ng/g). The levels of PAHs in the litter and soil samples normalized by organic carbon (OC) fractions also showed decreasing trends, indicating a direct influence of the forest fire. Among the 16 target PAHs, naphthalene was a dominant compound for all types of samples. Light PAHs with 2-4 rings significantly contributed to the total concentration, and their contribution decreased in the course of time. Runoff by heavy precipitation, evaporation, and degradation of PAHs in the summer were probably the main reasons for the observed time trends. The results of principal component analysis (PCA) and diagnostic ratio also supported that the forest fire was indeed an important source of PAHs in the study area. © 2013. 15. High ice nucleation activity located in blueberry stem bark is linked to primary freeze initiation and adaptive freezing behaviour of the bark Science.gov (United States) Kishimoto, Tadashi; Yamazaki, Hideyuki; Saruwatari, Atsushi; Murakawa, Hiroki; Sekozawa, Yoshihiko; Kuchitsu, Kazuyuki; Price, William S.; Ishikawa, Masaya 2014-01-01 Controlled ice nucleation is an important mechanism in cold-hardy plant tissues for avoiding excessive supercooling of the protoplasm, for inducing extracellular freezing and/or for accommodating ice crystals in specific tissues. To understand its nature, it is necessary to characterize the ice nucleation activity (INA), defined as the ability of a tissue to induce heterogeneous ice nucleation. 
Few studies have addressed the precise localization of INA in wintering plant tissues in respect of its function. For this purpose, we recently revised a test tube INA assay and examined INA in various tissues of over 600 species. Extremely high levels of INA (−1 to −4 °C) in two wintering blueberry cultivars of contrasting freezing tolerance were found. Their INA was much greater than in other cold-hardy species and was found to be evenly distributed along the stems of the current year's growth. Concentrations of active ice nuclei in the stem were estimated from quantitative analyses. Stem INA was localized mainly in the bark while the xylem and pith had much lower INA. Bark INA was located mostly in the cell wall fraction (cell walls and intercellular structural components). Intracellular fractions had much less INA. Some cultivar differences were identified. The results corresponded closely with the intrinsic freezing behaviour (extracellular freezing) of the bark, icicle accumulation in the bark and initial ice nucleation in the stem under dry surface conditions. Stem INA was resistant to various antimicrobial treatments. These properties and specific localization imply that high INA in blueberry stems is of intrinsic origin and contributes to the spontaneous initiation of freezing in extracellular spaces of the bark by acting as a subfreezing temperature sensor. PMID:25082142 16. Genotype variation in bark texture drives lichen community assembly across multiple environments. Science.gov (United States) Lamit, L J; Lau, M K; Naesborg, R Reese; Wojtowicz, T; Whitham, T G; Gehring, C A 2015-04-01 A major goal of community genetics is to understand the influence of genetic variation within a species on ecological communities. 
Although well-documented for some organisms, additional research is necessary to understand the relative and interactive effects of genotype and environment on biodiversity, identify mechanisms through which tree genotype influences communities, and connect this emerging field with existing themes in ecology. We employ an underutilized but ecologically significant group of organisms, epiphytic bark lichens, to understand the relative importance of Populus angustifolia (narrowleaf cottonwood) genotype and environment on associated organisms within the context of community assembly and host ontogeny. Several key findings emerged. (1) In a single common garden, tree genotype explained 18-33% and 51% of the variation in lichen community variables and rough bark cover, respectively. (2) Across replicated common gardens, tree genotype affected lichen species richness, total lichen cover, lichen species composition, and rough bark cover, whereas environment only influenced composition and there were no genotype by environment interactions. (3) Rough bark cover was positively correlated with total lichen cover and richness, and was associated with a shift in species composition; these patterns occurred with variation in rough bark cover among tree genotypes of the same age in common gardens and with increasing rough bark cover along a -40 year tree age gradient in a natural riparian stand. (4) In a common garden, 20-year-old parent trees with smooth bark had poorly developed lichen communities, similar to their 10-year-old ramets (root suckers) growing in close proximity, while parent trees with high rough bark cover had more developed communities than their ramets. These findings indicate that epiphytic lichens are influenced by host genotype, an effect that is robust across divergent environments. Furthermore, the response to tree genotype is 17. Analgesic and anti-inflammatory activity of root bark of Grewia asiatica Linn. in rodents. 
Science.gov (United States) Paviaya, Udaybhan Singh; Kumar, Parveen; Wanjari, Manish M; Thenmozhi, S; Balakrishnan, B R 2013-01-01 Grewia asiatica Linn. (Family: Tiliaceae), called Phalsa in Hindi is an Indian medicinal plant used for a variety of therapeutic and nutritional uses. The root bark of the plant is traditionally used in rheumatism (painful chronic inflammatory condition). The present study demonstrates the analgesic and anti-inflammatory activity of root bark of G. asiatica in rodents. The methanolic extract of Grewia asiatica (MEGA) and aqueous extract of Grewia asiatica (AEGA) of the bark were prepared and subjected to phytochemical tests and pharmacological screening for analgesic and anti-inflammatory effect in rodents. Analgesic effect was studied using acetic acid-induced writhing in mice and hot plate analgesia in rats while anti-inflammatory activity was investigated using carrageenan-induced paw oedema in rats. The MEGA or AEGA was administered orally in doses of 200 and 400 mg/kg/day of body weight. Data were analysed by one-way analysis of variance followed by Dunnett's test. The extracts showed a significant inhibition of writhing response and increase in hot plate reaction time and also caused a decrease in paw oedema. The effects were comparable with the standard drugs used. The present study indicates that root bark of G. asiatica exhibits peripheral and central analgesic effect and anti-inflammatory activity, which may be attributed to the various phytochemicals present in root bark of G. asiatica. 18. Ameliorative Activity of Ethanolic Extract of Artocarpus heterophyllus Stem Bark on Alloxan-induced Diabetic Rats. Science.gov (United States) Ajiboye, Basiru Olaitan; Adeleke Ojo, Oluwafemi; Adeyonu, Oluwatosin; Imiere, Oluwatosin; Emmanuel Oyinloye, Babatunji; Ogunmodede, Oluwafemi 2018-03-01 Purpose: Diabetes mellitus is one of the major endocrine disorders, characterized by impaired insulin action and deficiency. 
Traditionally, Artocarpus heterophyllus stem bark has been reputedly used in the management of diabetes mellitus and its complications. The present study evaluates the ameliorative activity of ethanol extract of Artocarpus heterophyllus stem bark in alloxan-induced diabetic rats. Methods: Diabetes mellitus was induced by single intraperitoneal injection of 150 mg/kg body weight of alloxan and the animals were orally administered with 50, 100 and 150 mg/kg body weight ethanol extract of Artocarpus heterophyllus stem bark once daily for 21 days. Results: At the end of the intervention, diabetic control rats showed significant (p<0.05) alterations, which were ameliorated by the ethanol extract of Artocarpus heterophyllus stem bark, most especially at 150 mg/kg body weight, which exhibited no significant (p>0.05) difference from non-diabetic rats. Conclusion: The results suggest that ethanol extract of Artocarpus heterophyllus stem bark may be useful in ameliorating complications associated with diabetes mellitus patients. 19. Growth and Wood/Bark Properties of Abies faxoniana Seedlings as Affected by Elevated CO2 Institute of Scientific and Technical Information of China (English) Yun-Zhou Qiao; Yuan-Bin Zhang; Kai-Yun Wang; Qian Wang; Qi-Zhuo Tian 2008-01-01 Growth and wood and bark properties of Abies faxoniana seedlings after one year's exposure to elevated CO2 concentration (ambient + 350 (±25) μmol/mol) under two planting densities (28 or 84 plants/m2) were investigated in closed-top chambers. Tree height, stem diameter and cross-sectional area, and total biomass were enhanced under elevated CO2 concentration, and reduced under high planting density. Most traits of stem bark were improved under elevated CO2 concentration and reduced under high planting density. Stem wood production was significantly increased in volume under elevated CO2 concentration under both densities, and the stem wood density decreased under elevated CO2 concentration and increased under high planting density.
These results suggest that the response of stem wood and bark to elevated CO2 concentration is density dependent. This may be of great importance in a future CO2 enriched world in natural forests where plant density varies considerably. The results also show that the bark/wood ratio in diameter, stem cross-sectional area and dry weight are not proportionally affected by elevated CO2 concentration under the two contrasting planting densities. This indicates that the response magnitude of stem bark and stem wood to elevated CO2 concentration are different but their response directions are the same. 20. Paper production from wild dogwood (Cornus australis L.) and the effect of bark on paper properties Directory of Open Access Journals (Sweden) Ayhan Gençer 2017-11-01 Full Text Available Generally, bark has a negative effect on pulp and paper properties. In this study, paper pulp and hand sheets were produced from wild dogwood (Cornus australis L.) using the Kraft method. Cooking was performed under different conditions, keeping the chip/solution ratio (1/5) and the cooking temperature (170±2 °C) constant, with Na2S/NaOH charges of 18/20, 18/15, 18/10 and 18/5. Samples were used with and without bark in order to identify the negative impacts of the bark on pulp and paper production. In addition, it was investigated whether the time to reach the maximum temperature in the K2 cook could be reduced from 120 minutes to 90 minutes to save time and energy. The bark had a negative effect on pulp yield and on all of the mechanical properties that were measured, but this effect was not statistically significant for the mechanical properties at the 95% confidence level. On the other hand, the bark had a significant (95% confidence level) negative effect on brightness and a significant positive effect on opacity. 1.
The interaction of Saccharomyces paradoxus with its natural competitors on oak bark Science.gov (United States) Kowallik, Vienna; Miller, Eric; Greig, Duncan 2015-01-01 The natural history of the model yeast Saccharomyces cerevisiae is poorly understood and confounded by domestication. In nature, S. cerevisiae and its undomesticated relative S. paradoxus are usually found on the bark of oak trees, a habitat very different from wine or other human fermentations. It is unclear whether the oak trees are really the primary habitat for wild yeast, or whether this apparent association is due to biased sampling. We use culturing and high-throughput environmental sequencing to show that S. paradoxus is a very rare member of the oak bark microbial community. We find that S. paradoxus can grow well on sterile medium made from oak bark, but that its growth is strongly suppressed when the other members of the community are present. We purified a set of twelve common fungal and bacterial species from the oak bark community and tested how each affected the growth of S. paradoxus in direct competition on oak bark medium at summer and winter temperatures, identifying both positive and negative interactions. One Pseudomonas species produces a diffusible toxin that suppresses S. paradoxus as effectively as either the whole set of twelve species together or the complete community present in nonsterilized oak medium. Conversely, one of the twelve species, Mucilaginibacter sp., had the opposite effect and promoted S. paradoxus growth at low temperatures. We conclude that, in its natural oak tree habitat, S. paradoxus is a rare species whose success depends on the much more abundant microbial species surrounding it. PMID:25706044 2. Antivenom potential of ethanolic extract of Cordia macleodii bark against Naja venom. 
Science.gov (United States) Soni, Pranay; Bodakhe, Surendra H 2014-05-01 To evaluate the antivenom potential of ethanolic extract of bark of Cordia macleodii against Naja venom induced pharmacological effects such as lethality, hemorrhagic lesion, necrotizing lesion, edema, cardiotoxicity and neurotoxicity. Wistar strain rats were challenged with Naja venom and treated with the ethanolic extract of Cordia macleodii bark. The effectiveness of the extract to neutralize the lethalities of Naja venom was investigated as recommended by WHO. At the dose of 400 and 800 mg/kg ethanolic extract of Cordia macleodii bark significantly inhibited the Naja venom induced lethality, hemorrhagic lesion, necrotizing lesion and edema in rats. Ethanolic extract of Cordia macleodii bark was effective in neutralizing the coagulant and defibrinogenating activity of Naja venom. The cardiotoxic effects in isolated frog heart and neurotoxic activity studies on frog rectus abdominus muscle were also antagonized by ethanolic extract of Cordia macleodii bark. It is concluded that the protective effect of extract of Cordia macleodii against Naja venom poisoning may be mediated by the cardiotonic, proteolysin neutralization, anti-inflammatory, antiserotonic and antihistaminic activity. It is possible that the protective effect may also be due to precipitation of active venom constituents. 3. Equations of bark thickness and volume profiles at different heights with easy-measurement variables Energy Technology Data Exchange (ETDEWEB) Cellini, J. M.; Galarza, M.; Burns, S. L.; Martinez-Pastur, G. J.; Lencinas, M. V. 2012-11-01 The objective of this work was to develop equations of thickness profile and bark volume at different heights with easy-measurement variables, taking as a study case Nothofagus pumilio forests, growing in different site qualities and growth phases in Southern Patagonia. Data was collected from 717 harvested trees. 
Three models were fitted using multiple non-linear regression and generalized linear models, by stepwise methodology, the iteratively reweighted least squares method for maximum likelihood estimation, and the Marquardt algorithm. The predictor variables were diameter at 1.30 m height (DBH), relative height (RH) and growth phase (GP). Statistical evaluation was made through the adjusted coefficient of determination (r2-adj), standard error of the estimate (SEE), mean absolute error and residual analysis. All models presented good fit, with a significant correlation with growth phase. A decrease in thickness was observed as relative height increased. Moreover, a bark coefficient was developed to calculate volume with and without bark of individual trees, where significant differences according to site quality of the stands and DBH class of the trees were observed. It can be concluded that the prediction of bark thickness and bark coefficient is possible using DBH, height, site quality and growth phase, common and easy measurement variables used in forest inventories. (Author) 23 refs. 4. Removal of Murexide from Aqueous Solution Using Pomegranate bark as adsorbent International Nuclear Information System (INIS) Ishaq, M.I.; Shakirullah, M.; Ahmad, I.; Sultan, S.; Saeed, K. 2012-01-01 The adsorption of Murexide from aqueous solution onto the Pomegranate bark was investigated at room temperature. The morphological study showed that the HNO3 treatment increased the surface roughness of the adsorbent. EDX studies show that the untreated Pomegranate bark had carbon content (52 wt %) and oxygen content (44 wt %) while in the case of HNO3-treated pomegranate bark, the carbon content decreased (42 wt %) and the oxygen content increased (52 wt %). The results showed that the adsorption of Murexide dye from aqueous solution increased with adsorption time until equilibrium was reached after 30 min of adsorption time.
The HNO3-treated pomegranate bark adsorbed a higher quantity of Murexide (1.7 mg/g) compared to untreated pomegranate bark (0.73 mg/g), which might be due to its increased surface roughness. The adsorption of Murexide was also studied at different pH values, which showed that low pH was favorable for the removal of color material from aqueous solution. (author) 5. Language specific bootstraps for UG categories NARCIS (Netherlands) van Kampen, N.J. 2005-01-01 This paper argues that the universal categories N/V are not applied to content words before the grammatical markings for reference D(eterminers) and predication I(nflection) have been acquired (van Kampen, 1997, contra Pinker, 1984). Child grammar starts as proto-grammar with language-specific 6. Quantum logic in dagger kernel categories NARCIS (Netherlands) Heunen, C.; Jacobs, B.P.F. 2009-01-01 This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial 7. Quantum logic in dagger kernel categories NARCIS (Netherlands) Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P. 2011-01-01 This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial 8. New Evidence for Infant Colour Categories Science.gov (United States) Franklin, Anna; Davies, Ian R. L. 2004-01-01 Bornstein, Kessen, and Weiskopf (1976) reported that pre-linguistic infants perceive colour categorically for primary boundaries: Following habituation, dishabituation only occurred if the test stimulus was from a different adult category to the original.
Here, we replicated this important study and extended it to include secondary boundaries,… 9. Ethnicity in censuses: Changeable and inconstant category Directory of Open Access Journals (Sweden) Mrđen Snježana 2002-01-01 The question of ethnicity was asked in all censuses of the SFRY, as well as in the first censuses of the countries created after its disintegration. Analysis of the censuses shows that ethnicity is a changeable category: not only did the manner of phrasing the question change, but so did the number of nationality categories and their order in published census results, depending on state policy and the political situation preceding each census. Since the answer to the ethnicity question is a subjective criterion, recorded according to the freely declared statement of residents as guaranteed by the Constitution, it has often happened that the same individuals declared themselves differently from one census to another, and some categories of ethnicity vanished while others were created. Although in the SFRY nations and ethnicities were formally equal, the existence of these two categories was still indirectly indicated in published results. In the newly created countries, however, the phrasing of the ethnicity question was changed, the number and order of categories were changed, and the notion of 'minority' was reintroduced, indicating, beyond doubt, a status of nationality (except for the majority) different from the one in the former Yugoslavia. 10. 40 CFR 2.105 - Exemption categories. Science.gov (United States) 2010-07-01 ... the mandatory disclosure requirements of 5 U.S.C. 552(a): (1)(i) Specifically authorized under... enforcement investigations or prosecutions if such disclosure could reasonably be expected to risk... for Disclosure of Records Under the Freedom of Information Act § 2.105 Exemption categories. (a) The... 11.
Reliability of Multi-Category Rating Scales Science.gov (United States) Parker, Richard I.; Vannest, Kimberly J.; Davis, John L. 2013-01-01 The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine… 12. 21 CFR 330.5 - Drug categories. Science.gov (United States) 2010-04-01 ... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Drug categories. 330.5 Section 330.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS FOR HUMAN...) Stimulants. (r) Antitussives. (s) Allergy treatment products. (t) Cold remedies. (u) Antirheumatic products... 13. Shape configuration and category-specificity DEFF Research Database (Denmark) Gerlach, Christian; Law, Ian; Paulson, Olaf B 2006-01-01 in difficult object decision tasks, which is also found in the present experiments with outlines, is reversed when the stimuli are fragmented. This interaction between category (natural versus artefacts) and stimulus type (outlines versus fragmented forms) is in accordance with predictions derived from... 14. Structural similarity and category-specificity DEFF Research Database (Denmark) Gerlach, Christian; Law, Ian; Paulson, Olaf B 2004-01-01 It has been suggested that category-specific recognition disorders for natural objects may reflect that natural objects are more structurally (visually) similar than artefacts and therefore more difficult to recognize following brain damage. On this account one might expect a positive relationshi... 15. Ontological semantics in modified categorial grammar DEFF Research Database (Denmark) Szymczak, Bartlomiej Antoni 2009-01-01 Categorial Grammar is a well established tool for describing natural language semantics. 
In the current paper we discuss some of its drawbacks and how it could be extended to overcome them. We use the extended version for deriving ontological semantics from text. A proof-of-concept implementation... 16. Semantic category interference in overt picture naming NARCIS (Netherlands) Maess, B.; Friederici, A.D.; Damian, M.F.; Meyer, A.S.; Levelt, W.J.M. 2002-01-01 The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures 17. Multimedia Category Preferences of Working Engineers Science.gov (United States) Baukal, Charles E., Jr.; Ausburn, Lynna J. 2016-01-01 Many have argued for the importance of continuing engineering education (CEE), but relatively few recommendations were found in the literature for how to use multimedia technologies to deliver it most effectively. The study reported here addressed this gap by investigating the multimedia category preferences of working engineers. Four categories… 18. Microarray analysis of kiwifruit (Actinidia chinensis) bark following challenge by the sucking insect Hemiberlesia lataniae (Hemiptera: Diaspididae) Directory of Open Access Journals (Sweden) M. Garry Hill 2016-03-01 Full Text Available Both commercial and experimental genotypes of kiwifruit (Actinidia spp.) exhibit large differences in response to insect pests. An understanding of the vine's physiological response to insect feeding and its genetic basis will be important in assisting the development of varieties with acceptable levels of pest resistance. This experiment describes transcriptome changes observed in the bark of kiwifruit 2 and 7 days after the commencement of feeding by the armored scale insect pest, Hemiberlesia lataniae.
Using a cDNA microarray consisting of 17,512 unigenes, we measured transcriptome changes and sorted them into functional ontology categories using MapMan. Results are available in the GEO database GSE73922 and are described fully in Ref. Hill et al. (2015) [1]. After 7 days, transcripts associated with photosynthesis were down-regulated and secondary metabolism was up-regulated. Differential expression of transcripts associated with stress response was consistent with a defense response involving both effector and herbivore-triggered immunities, with predominant involvement of the salicylic acid phytohormonal pathway. This hypothesis was supported by the results of two laboratory experiments. The methods described here could be further adapted and applied to the study of plant responses to a wide range of sessile sucking pests. 19. Focal colors across languages are representative members of color categories. Science.gov (United States) Abbott, Joshua T; Griffiths, Thomas L; Regier, Terry 2016-10-04 Focal colors, or best examples of color terms, have traditionally been viewed as either the underlying source of cross-language color-naming universals or derived from category boundaries that vary widely across languages. Existing data partially support and partially challenge each of these views. Here, we advance a position that synthesizes aspects of these two traditionally opposed positions and accounts for existing data. We do so by linking this debate to more general principles. We show that best examples of named color categories across 112 languages are well-predicted from category extensions by a statistical model of how representative a sample is of a distribution, independently shown to account for patterns of human inference. This model accounts for both universal tendencies and variation in focal colors across languages.
We conclude that categorization in the contested semantic domain of color may be governed by principles that apply more broadly in cognition and that these principles clarify the interplay of universal and language-specific forces in color naming. 20. Typicality Mediates Performance during Category Verification in Both Ad-Hoc and Well-Defined Categories Science.gov (United States) Sandberg, Chaleece; Sebastian, Rajani; Kiran, Swathi 2012-01-01 Background: The typicality effect is present in neurologically intact populations for natural, ad-hoc, and well-defined categories. Although sparse, there is evidence of typicality effects in persons with chronic stroke aphasia for natural and ad-hoc categories. However, it is unknown exactly what influences the typicality effect in this… 1. Uncovering Contrast Categories in Categorization with a Probabilistic Threshold Model Science.gov (United States) Verheyen, Steven; De Deyne, Simon; Dry, Matthew J.; Storms, Gert 2011-01-01 A contrast category effect on categorization occurs when the decision to apply a category term to an entity not only involves a comparison between the entity and the target category but is also influenced by a comparison of the entity with 1 or more alternative categories from the same domain as the target. Establishing a contrast category effect… 2. Use of 60Co gamma radiation in increased levels of total polyphenol extracts of bark of Schinus terebinthifolius Raddi International Nuclear Information System (INIS) Santos, Gustavo H.F.; Silva, Edvane B.; Silva, Hianna A.M.F.; Amorin, Elba L.C.; Peixoto, Tadeu J.S.; Yara, Ricardo; Lima, Claudia S.A. 2013-01-01 Schinus terebinthifolius Raddi (Anacardiaceae) is well known as sources of phenolic compounds. Known as mastic pepper, red pepper tree is a plant native to midsize coast of Brazil. Some of its structures have proven antibacterial, anti-inflammatory, antifungal and healing. 
The aim of this study was to evaluate the difference in the phenol contents of crude extracts that were measured after irradiating the barks of S. terebinthifolius using gamma radiation from 60Co. The crude extracts were divided into a control group and eight experimental groups, which were separated based on the doses of gamma radiation to which they were exposed: 2.5, 5.0, 7.5, 10.0, 12.5, 15.0, 20.0 and 50.0 kGy (assays were performed in triplicate). The results show that gamma irradiation increased the total polyphenol content of S. terebinthifolius bark extracts (p > 0.05), from 30.07% at 0 kGy to between 41.93% (2.5 kGy) and 44.52% (50.0 kGy), rising gradually up to 10.0 kGy, where the maximum (68.44%) was reached. The study thus presents 60Co gamma irradiation as an alternative means of significantly increasing the content of certain natural substances in plant material, thereby contributing to the various therapeutic applications for which they are used. (author) 3. Use of {sup 60}Co gamma radiation in increased levels of total polyphenol extracts of bark of Schinus terebinthifolius Raddi Energy Technology Data Exchange (ETDEWEB) Santos, Gustavo H.F.; Silva, Edvane B.; Silva, Hianna A.M.F.; Amorin, Elba L.C.; Peixoto, Tadeu J.S.; Yara, Ricardo; Lima, Claudia S.A., E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil) 2013-07-01 Schinus terebinthifolius Raddi (Anacardiaceae) is well known as a source of phenolic compounds. Known as mastic pepper, the red pepper tree is a plant native to the coast of Brazil. Some of its structures have proven antibacterial, anti-inflammatory, antifungal and healing properties.
The aim of this study was to evaluate the difference in the phenol contents of crude extracts that were measured after irradiating the barks of S. terebinthifolius using gamma radiation from {sup 60}Co. The crude extracts were divided into a control group and eight experimental groups, which were separated based on the doses of gamma radiation to which they were exposed: 2.5, 5.0, 7.5, 10.0, 12.5, 15.0, 20.0 and 50.0 kGy (assays were performed in triplicate). The results show that gamma irradiation increased the total polyphenol content of S. terebinthifolius bark extracts (p > 0.05), from 30.07% at 0 kGy to between 41.93% (2.5 kGy) and 44.52% (50.0 kGy), rising gradually up to 10.0 kGy, where the maximum (68.44%) was reached. The study thus presents {sup 60}Co gamma irradiation as an alternative means of significantly increasing the content of certain natural substances in plant material, thereby contributing to the various therapeutic applications for which they are used. (author) 4. 78 FR 76060 - Pacific Ocean off the Pacific Missile Range Facility at Barking Sands, Island of Kauai, Hawaii... Science.gov (United States) 2013-12-16 ... the Pacific Missile Range Facility at Barking Sands, Island of Kauai, Hawaii; Danger Zone. AGENCY: U.S... Barking Sands, Island of Kauai, Hawaii. The U.S. Navy conducts weapon systems testing and other military... Sands, Island of Kauai, Hawaii. The proposed rule was published in the July 1, 2013 issue of the Federal... 5. 33 CFR 165.1406 - Safety Zone: Pacific Missile Range Facility (PMRF), Barking Sands, Island of Kauai, Hawaii. Science.gov (United States) 2010-07-01 ... Range Facility (PMRF), Barking Sands, Island of Kauai, Hawaii. 165.1406 Section 165.1406 Navigation and...), Barking Sands, Island of Kauai, Hawaii. (a) Location.
The following area is established as a safety zone during launch operations at PMRF, Kauai, Hawaii: The waters bounded by the following coordinates: (22°01... 6. Predicting live and dead basal area in bark beetle-affected forests from discrete-return LiDAR Science.gov (United States) Andrew T. Hudak; Ben Bright; Jose Negron; Robert McGaughey; Hans-Erik Andersen; Jeffrey A. Hicke 2012-01-01 Recent bark beetle outbreaks in western North America have been widespread and severe. High tree mortality due to bark beetles affects the fundamental ecosystem processes of primary production and decomposition that largely determine carbon balance (Kurz et al. 2008, Pfeifer et al. 2011, Hicke et al. 2012). Forest managers need accurate data on beetle-induced tree... 7. Evaluation for substitution of stem bark with small branches of Myrica esculenta for medicinal use – A comparative phytochemical study Directory of Open Access Journals (Sweden) Bhavana Srivastava 2016-10-01 Conclusion: Similarities in the phytochemical analysis and HPTLC profiles of the various extracts suggest that small branches may be used in place of stem bark. The study provides a basis for further work on using small branches as a substitute for the stem bark of M. esculenta. 8. Pheromone-mediated mate location and discrimination by two syntopic sibling species of Dendroctonus bark beetles in Chiapas, Mexico Science.gov (United States) Alicia Nino-Dominguez; Brian T. Sullivan; Jose H. Lopez-Urbina; Jorge E. Macias-Samano 2015-01-01 Where their geographic and host ranges overlap, sibling species of tree-killing bark beetles may simultaneously attack and reproduce on the same hosts. However, sustainability of these potentially mutually beneficial associations demands effective prezygotic reproductive isolation mechanisms between the interacting species. The pine bark beetle, Dendroctonus... 9. Definition of spatial patterns of bark beetle Ips typographus (L.)
outbreak spreading in Tatra Mountains (Central Europe), using GIS Science.gov (United States) Rastislav Jakus; Wojciech Grodzki; Marek Jezik; Marcin Jachym 2003-01-01 The spread of bark beetle outbreaks in the Tatra Mountains was explored by using both terrestrial and remote sensing techniques. Both approaches have proven to be useful for studying spatial patterns of bark beetle population dynamics. The terrestrial methods were applied on existing forestry databases. Vegetation change analysis (image differentiation), digital... 10. Effects of symbiotic bacteria and tree chemistry on the growth and reproduction of bark beetle fungal symbionts Science.gov (United States) A.S. Adams; C.R. Currie; Y. Cardoza; K.D. Klepzig; K.F. Raffa 2009-01-01 Bark beetles are associated with diverse assemblages of microorganisms, many of which affect their interactions with host plants and natural enemies. We tested how bacterial associates of three bark beetles with various types of host relationships affect growth and reproduction of their symbiotic fungi. Fungi were exposed to volatiles... 11. Landsat time series and lidar as predictors of live and dead basal area across five bark beetle-affected forests Science.gov (United States) Benjamin C. Bright; Andrew T. Hudak; Robert E. Kennedy; Arjan J. H. Meddens 2014-01-01 Bark beetle-caused tree mortality affects important forest ecosystem processes. Remote sensing methodologies that quantify live and dead basal area (BA) in bark beetle-affected forests can provide valuable information to forest managers and researchers. We compared the utility of light detection and ranging (lidar) and the Landsat-based detection of trends in... 12. Chromatographic fingerprint analysis of yohimbe bark and related dietary supplements using UHPLC/UV/MS. 
Science.gov (United States) Sun, Jianghao; Chen, Pei 2012-03-05 A practical ultra high-performance liquid chromatography (UHPLC) method was developed for fingerprint analysis of, and determination of yohimbine in, yohimbe barks and related dietary supplements. Good separation was achieved using a Waters Acquity BEH C(18) column with gradient elution using 0.1% (v/v) aqueous ammonium hydroxide and 0.1% ammonium hydroxide in methanol as the mobile phases. The study is the first reported chromatographic method that separates corynanthine from yohimbine in yohimbe bark extract. The chromatographic fingerprint analysis was applied to 18 commercial yohimbe dietary supplement samples. Quantitation of yohimbine, the traditional method for analysis of yohimbe barks, was also performed to evaluate the results of the fingerprint analysis. Wide variability was observed in fingerprints and yohimbine content among yohimbe dietary supplement samples. For most of the dietary supplements, the yohimbine content was not consistent with the label claims. Copyright © 2011. Published by Elsevier B.V. 13. Microscopic and UPLC-UV-MS analyses of authentic and commercial yohimbe (Pausinystalia johimbe) bark samples. Science.gov (United States) Raman, Vijayasankar; Avula, Bharathi; Galal, Ahmed M; Wang, Yan-Hong; Khan, Ikhlas A 2013-01-01 Yohimbine is the major alkaloid found in the stem bark of yohimbe, Pausinystalia johimbe (Rubiaceae), an evergreen tree native to Africa. The objectives of the current study were to provide a detailed anatomy of yohimbe bark, as well as to determine the quantity of yohimbine in the raw yohimbe products sold online. Twelve commercial raw materials of yohimbe were analyzed by microscopic and ultra performance liquid chromatography-UV-MS methods. The study revealed that three samples were probably adulterated and four other samples contained various levels of impurities.
Yohimbine was not detected in one sample, whereas its presence in other samples was found to be in the range 0.1-0.91%. The present work also provides a detailed anatomy of the stem bark of yohimbe, with light and scanning electron microscopy images, for proper identification and authentication. 14. Copper, nickel and lead in lichen and tree bark transplants over different periods of time Energy Technology Data Exchange (ETDEWEB) Baptista, Mafalda S. [CIIMAR, Rua dos Bragas, 289, 4050-123 Porto (Portugal)], E-mail: [email protected]; Vasconcelos, M. Teresa S.D. [CIIMAR, Rua dos Bragas, 289, 4050-123 Porto (Portugal); Chemistry Department, Faculty of Sciences, University of Porto, Rua do Campo Alegre, 687, 4169-071 Porto (Portugal)], E-mail: [email protected]; Cabral, Joao Paulo [CIIMAR, Rua dos Bragas, 289, 4050-123 Porto (Portugal); Botany Department, Faculty of Sciences, University of Porto, Rua do Campo Alegre, 1191, 4150-181 Porto (Portugal)], E-mail: [email protected]; Freitas, M. Carmo [ITN - Technological and Nuclear Institute, Reactor E.N. 10, 2686-953 Sacavem (Portugal)], E-mail: [email protected]; Pacheco, Adriano M.G. [CVRM-IST - Technical University of Lisbon, Avenida Rovisco Pais, 1, 1049-001 Lisbon (Portugal)], E-mail: [email protected] 2008-01-15 This work aimed at comparing the dynamics of atmospheric metal accumulation by the lichen Flavoparmelia caperata and bark of Platanus hybrida over different periods of time. Transplants were exposed in three Portuguese coastal cities. Samples were retrieved (1) every 2 months (discontinuous exposure), or (2) after 2-, 4-, 6-, 8- and 10-month periods (continuous exposure), and analysed for Cu, Ni and Pb. Airborne accumulation of metals was essentially independent of climatic factors. For both biomonitors [Pb] > [Ni] > [Cu] but Pb was the only element for which a consistent pattern of accumulation was observed, with the bark outperforming the lichen. 
The longest exposure periods hardly ever corresponded to the highest accumulation. This might have been partly because the biomonitors bound and released metals throughout the exposure, each with its own dynamics of accumulation, but both according to the environmental metal availability. - Lichen and tree bark have distinct dynamics of airborne metal accumulation. 15. Copper, nickel and lead in lichen and tree bark transplants over different periods of time International Nuclear Information System (INIS) Baptista, Mafalda S.; Vasconcelos, M. Teresa S.D.; Cabral, Joao Paulo; Freitas, M. Carmo; Pacheco, Adriano M.G. 2008-01-01 This work aimed at comparing the dynamics of atmospheric metal accumulation by the lichen Flavoparmelia caperata and bark of Platanus hybrida over different periods of time. Transplants were exposed in three Portuguese coastal cities. Samples were retrieved (1) every 2 months (discontinuous exposure), or (2) after 2-, 4-, 6-, 8- and 10-month periods (continuous exposure), and analysed for Cu, Ni and Pb. Airborne accumulation of metals was essentially independent of climatic factors. For both biomonitors [Pb] > [Ni] > [Cu] but Pb was the only element for which a consistent pattern of accumulation was observed, with the bark outperforming the lichen. The longest exposure periods hardly ever corresponded to the highest accumulation. This might have been partly because the biomonitors bound and released metals throughout the exposure, each with its own dynamics of accumulation, but both according to the environmental metal availability. - Lichen and tree bark have distinct dynamics of airborne metal accumulation 16. In vitro studies on the hypoglycemic potential of Ficus racemosa stem bark. Science.gov (United States) Ahmed, Faiyaz; Urooj, Asna 2010-02-01 Medicinal plants have been reported to play an important role in modulating glycemic responses and have preventive and therapeutic implications. 
Several mechanisms have been proposed for the antidiabetic effect of medicinal plants such as inhibition of carbohydrate-metabolizing enzymes, manipulation of glucose transporters, beta-cell regeneration and enhancing insulin-releasing activity. The present investigation evaluated the possible mechanism of action through which Ficus racemosa stem bark (Moraceae) exerts its hypoglycemic effect using suitable in vitro techniques. Ficus racemosa bark (FRB) exhibited significantly higher (P FRB, as reflected by a significantly lower (P system containing FRB compared to the control and acarbose. Furthermore, FRB significantly increased (P < or = 0.01) the rate of glucose transport across the yeast cell membrane and also in isolated rat hemi-diaphragm. The findings indicate F. racemosa bark to possess strong hypoglycemic effect and hence can be utilized as an adjunct in the management of diabetes mellitus. 17. Mimusops elengi bark extract mediated green synthesis of gold nanoparticles and study of its catalytic activity Science.gov (United States) Majumdar, Rakhi; Bag, Braja Gopal; Ghosh, Pooja 2016-04-01 The bark extract of Mimusops elengi is rich in different types of plant secondary metabolites such as flavonoids, tannins, triterpenoids and saponins. The present study shows the usefulness of the bark extract of Mimusops elengi for the green synthesis of gold nanoparticles in water at room temperature under very mild conditions. The synthesis of the gold nanoparticles was complete within a few minutes without any extra stabilizing or capping agents and the polyphenols present in the bark extract acted as both reducing as well as stabilizing agents. The synthesized colloidal gold nanoparticles were characterized by HRTEM, surface plasmon resonance spectroscopy and X-ray diffraction studies. 
The synthesized gold nanoparticles have been used as an efficient catalyst for the reduction of 3-nitrophenol and 4-nitrophenol to their corresponding aminophenols in water at room temperature. 18. Acidity of tree bark as a bioindicator of forest pollution in southern Poland Energy Technology Data Exchange (ETDEWEB) Grodznska, K 1976-01-01 pH values and buffering capacity were determined for bark samples of 5 deciduous trees (oak, alder, hornbeam, ash, linden), one shrub (hazel) and one coniferous tree (Scots pine) in the Cracow industrial region (southern Poland) and, for comparison, in the Bialowieza Forest (north-eastern Poland). A correlation was found between acidification of tree bark and air pollution by SO/sub 2/ in these areas. All trees showed the least acidic reaction in the control area (Bialowieza Forest), more acidic in Niepolomice Forest and the most acidic in the center of Cracow city. The buffering capacity of the bark against alkali increased with increasing air pollution. The seasonal fluctuation of pH values is recommended as a sensitive and simple indicator of air pollution. 19. Monitoring atmospheric nitrogen pollution in Guiyang (SW China) by contrasting use of Cinnamomum Camphora leaves, branch bark and bark as biomonitors. Science.gov (United States) Xu, Yu; Xiao, Huayun; Guan, Hui; Long, Chaojun 2018-02-01 Moss (as a reference material) and camphor (Cinnamomum Camphora) leaf, branch bark and bark samples were systematically collected across an urban-rural gradient in Guiyang (SW China) to determine the efficacy of using these bio-indicators to evaluate nitrogen (N) pollution.
The tissue N concentrations (0.13%-2.70%) and δ15N values (-7.5‰ to +9.3‰) of all of these bio-indicators exhibited large spatial variations, recording higher values in urban areas that decreased quickly with distance from the city center; moreover, both soil N concentrations and soil δ15N values showed no significant differences within each 6-km interval from the urban to the rural area. This not only suggests that the different N uptake strategies and variety of N responses of these bio-indicators are reflected in their different susceptibilities to variations in N deposition, but also reveals that they are able to indicate that urban N deposition is mostly from traffic and industry (NOx-N), whereas rural N deposition is mainly from agriculture (NHx-N). Compared to previously collected urban moss and camphor leaf samples, the significantly increased δ15N values in current urban moss and camphor leaf samples further indicate a greater contribution of NOx-N than NHx-N to urban N deposition. The feasibility of using the N concentrations and δ15N values of branch bark and bark as biomarkers of N deposition was thus further confirmed through the comparative use of these bio-indicators. It can be concluded that vascular plant leaves, branch bark and bark are useful biomonitoring tools for evaluating atmospheric N pollution. For further study, quantitative criteria for the practical use of these bio-indicators in response to N deposition should be developed, and the differences in the δ15N values of different plant parts should also be considered, particularly in urban environments that are severely disrupted by atmospheric pollution. Copyright © 2017 20. Correspondence between Grammatical Categories and Grammatical Functions in Chinese. Science.gov (United States) Tan, Fu 1993-01-01 A correspondence is shown between grammatical categories and grammatical functions in Chinese.
Some syntactic properties distinguish finite verbs from nonfinite verbs, nominals from other categories, and verbs from other categories. (Contains seven references.) (LB) 1. The application of tree bark as bio-indicator for the assessment of Cr(VI) in air pollution International Nuclear Information System (INIS) Mandiwana, Khakhathi L.; Resane, Tabby; Panichev, Nikolay; Ngobeni, Prince 2006-01-01 The impact of a chromium smelter on pollution was evaluated by determining Cr(VI) in topsoil, grass and tree bark by electrothermal atomic absorption spectrometry (ETAAS). It was found that bark reflected the levels of air pollution better than soil and grass due to its high accumulative ability for Cr(VI). The tree bark was contaminated with Cr(VI) at a level nine times higher than that of the soil. It is therefore suggested that the bark be used as an indicator of air pollution for long-term exposure. The concentration of Cr(VI) in the bark was always a fraction of the total concentration of Cr and ranged between 1.6 and 3%. The method used in the preparation of samples was validated by the analysis of certified reference materials. 2. Black poplar-tree (Populus nigra L.) bark as an alternative indicator of urban air pollution by chemical elements International Nuclear Information System (INIS) Berlizov, A.N.; Malyuk, I.A.; Tryshyn, V.V. 2008-01-01 Capabilities of black poplar-tree (Populus nigra L.) bark as a biomonitor of atmospheric air pollution by chemical elements were tested against the epiphytic lichens Xanthoria parietina (L.) and Physcia adscendens (Fr.). Concentrations of 40 macro and trace elements were determined using epicadmium and instrumental NAA. The data obtained were processed using non-parametric tests. A good correlation was found between the concentrations of the majority of elements in bark and lichens. In terms of accumulation capability, bark proved competitive with both lichens examined. The main inorganic components of black poplar-tree bark were revealed.
A substrate influence on the concentrations of some elements in epiphytic lichens was established. An optimized procedure of bark pre-irradiation treatment was suggested. (author) 3. Investigation of Solid Energy Potential of Wood and Bark Obtained from Four Clones of a 2-Year Old Goat Willow International Nuclear Information System (INIS) Han, Sim-Hee; Shin, Soo-Jeong 2014-01-01 To investigate the solid raw material characteristics of willow (Salix caprea) bark and woody core, this study analyzed overall chemical composition, monosaccharide composition, ash content, and main ash composition of both tree components. Significant differences were observed between the two in terms of chemical composition, carbohydrate composition, ash content, and major inorganics. The ash content in bark was 3.8–4.7%, compared with 0.6–1.1% in the woody core. Polysaccharide content in the woody core was 62.8–70.6% but was as low as 44.1–47.6% in the bark. The main hemicelluloses consisting of monosaccharides were xylose in the case of the woody core, and xylose, galactose, and arabinose in the case of bark. Woody core biomass of willow provides superior solid fuel raw material, as compared with bark biomass, with higher heating values, less ash content, and less slagging-causing material. 4. Effects of bark flour of Passiflora edulis on food intake, body weight and behavioral response of rats Directory of Open Access Journals (Sweden) Dandara A.F. Figueiredo Full Text Available ABSTRACT Effects of treatment with the bark flour of Passiflora edulis Sims, Passifloraceae, were evaluated. Adult male Wistar rats were treated for 30 days (130 mg/kg, p.o. with the albedo flour, flavedo and full bark of P. edulis, corresponding to albedo associated with flavedo. Behavioral response observed after treatment with bark flour P. 
edulis showed sedative effects, reflected in reduced exploratory activity and an increased duration of immobility in the open field test for the group of animals that received the albedo flour combined with the flavedo. Sedative effects were observed in the absence of motor incoordination or muscle relaxation. Food intake of the experimental animals was unchanged, but weight gain was reduced both in animals that received only albedo flour and in those that received the full bark flour. The full bark flour of Passiflora thus showed sedative effects, with no detectable anxiolytic effect, muscle relaxation or motor incoordination, and reduced body weight gain. 5. Irradiation effect on the antioxidant, antimicrobial and cytoprotective properties of the bark of Punica granatum International Nuclear Information System (INIS) Sanaa, Chahnez 2013-01-01 The bark of pomegranate has been used for many years to treat various health problems. Several studies have focused on characterizing its properties, including antibacterial, antioxidant and cytoprotective activities. Pomegranate rind powder is an effective treatment against ulcers of the stomach and intestines and strengthens the wall of the gastrointestinal tract. In this work, we studied the effects of gamma irradiation on the antibacterial and anti-ulcer properties of pomegranate bark. This study was conducted on powdered pomegranate bark irradiated with decreasing radiation doses from 25 kGy to 1.25 kGy. Our results show that low-dose irradiation improves the effectiveness of pomegranate bark for the treatment of gastric ulcer, whereas high-dose irradiation enhances the antibacterial activity of pomegranate bark against Staphylococcus aureus.
Investigation of Solid Energy Potential of Wood and Bark Obtained from Four Clones of a 2-Year Old Goat Willow Energy Technology Data Exchange (ETDEWEB) Han, Sim-Hee [Department of Forest Genetic Resources, Korea Forest Research Institute, Suwon (Korea, Republic of); Shin, Soo-Jeong, E-mail: [email protected] [Department of Wood and Paper Science, Chungbuk National University, Cheongju (Korea, Republic of) 2014-01-31 To investigate the solid raw material characteristics of willow (Salix caprea) bark and woody core, this study analyzed overall chemical composition, monosaccharide composition, ash content, and main ash composition of both tree components. Significant differences were observed between the two in terms of chemical composition, carbohydrate composition, ash content, and major inorganics. The ash content in bark was 3.8–4.7%, compared with 0.6–1.1% in the woody core. Polysaccharide content in the woody core was 62.8–70.6% but was as low as 44.1–47.6% in the bark. The main hemicelluloses consisting of monosaccharides were xylose in the case of the woody core, and xylose, galactose, and arabinose in the case of bark. Woody core biomass of willow provides superior solid fuel raw material, as compared with bark biomass, with higher heating values, less ash content, and less slagging-causing material. 7. Gas release and leachates at bark storage: Laboratory and field studies. Final report International Nuclear Information System (INIS) Jirjis, Raida; Andersson, Paal; Aronsson, Paer 2005-01-01 Large volumes of bark are produced as a by-product from saw mills and pulp and paper industry all year round in Sweden. Most of the bark is used as a biofuel. Due to the uneven demand for the fuel during the year, bark has to be often stored for a few months. Storage normally takes place outdoors in fairly large piles. A number of biological and chemical processes are known to occur during storage. 
These processes can lead to the emission and leakage of environmentally unacceptable products, which can also affect the working environment. The aim of this project was to evaluate the outcome of some of these processes and to assess their effects on the working environment as well as the surrounding environment. This study investigates the storage of fresh bark from pine and spruce in laboratory-scale experiments and a large-scale storage trial. Results of the analyses of bark material, before and after storage, and of the chemical constituents of the released gases and leached material are presented. Estimates of the total amounts that can be released in gas form or leached out from bark piles during storage, and the possible environmental consequences, are discussed. Conclusions and some practical suggestions concerning bark storage are given in this report. The laboratory experiment involved storage of fresh bark in a 34-litre cylindrical chamber at room temperature (RT) or heated to an average of 55 deg C. The chambers were designed to provide gas samples during the emission experiments and to allow irrigation during the leakage experiments. Sampling of the released gases (using Tenax adsorbent) was performed during two or three weeks of storage for spruce and pine bark respectively. The total volatile organic compounds (VOC) and individual monoterpenes were determined. Changes in the chemical constituents of bark during storage were studied using different extraction methods and measuring instruments, including gas chromatography with flame ionization detection (GC-FID) and GC-mass spectrometry (GC-MS). 8. Bark and Ambrosia Beetles Show Different Invasion Patterns in the USA. Directory of Open Access Journals (Sweden) Davide Rassati Full Text Available Non-native bark and ambrosia beetles represent a threat to forests worldwide. Their invasion patterns are, however, still unclear.
Here we investigated, first, whether the spread of non-native bark and ambrosia beetles is a gradual or a discontinuous process; second, what the main correlates of their community structure are; and third, whether those correlates correspond to those of native species. We used data on the species distribution of non-native and native scolytines in the 48 continental USA states. These data were analyzed through a beta-diversity index, partitioned into species richness differences and species replacement, using Mantel correlograms and non-metric multidimensional scaling (NMDS) ordination for identifying spatial patterns, and regression on distance matrices to test the association of climate (temperature, rainfall), forest (cover area, composition), geographical (distance), and human-related (import) variables with β-diversity components. For both non-native bark and ambrosia beetles, β-diversity was composed more of species richness differences than of species replacement. For non-native bark beetles, a discontinuous invasion process composed of long-distance jumps or multiple introduction events was apparent. Species richness differences were primarily correlated with differences in import values, while temperature was the main correlate of species replacement. For non-native ambrosia beetles, a more continuous invasion process was apparent, with the pool of non-native species arriving in the coastal areas tending to be filtered as they spread to interior portions of the continental USA. Species richness differences were mainly correlated with differences in rainfall among states, while rainfall and temperature were the main correlates of species replacement. Our study suggests that the different ecology of bark and ambrosia beetles influences their invasion process in new environments. The lower dependency that bark beetles have 9.
Occurrence of spruce bark beetles in forest stands at different levels of air pollution stress International Nuclear Information System (INIS) Grodzki, Wojciech; McManus, Michael; Knizek, Milos; Meshkova, Valentina; Mihalciuc, Vasile; Novotny, Julius; Turcani, Marek; Slobodyan, Yaroslav 2004-01-01 The spruce bark beetle, Ips typographus (L.), is the most serious pest of mature spruce stands, mainly Norway spruce, Picea abies (L.) Karst., throughout Eurasia. A complex of weather-related events and other environmental stresses is reported to predispose spruce stands to bark beetle attack and subsequent tree mortality; however, the possible role of industrial pollution as a predisposing factor to attack by this species is poorly understood. The abundance and dynamics of I. typographus populations were evaluated in 60-80 year old Norway spruce stands occurring on 10x50 ha sites in five countries within the Carpathian range that were selected in proximity to established ozone measurement sites. Data were recorded on several parameters including the volume of infested trees, captures of adult beetles in pheromone traps, number of attacks, and the presence and relative abundance of associated bark beetle species. In several cases, stands adjacent to sites with higher ozone values were associated with higher bark beetle populations. The volume of sanitary cuttings, a reflection of tree mortality, and the mean daily capture of beetles in pheromone traps were significantly higher at sites where the O3 level was higher. However, the mean infestation density on trees was higher in plots associated with lower O3 levels. Captures of beetles in pheromone traps and infestation densities were higher in the zone above 800 m. However, none of the relationships was conclusive, suggesting that spruce bark beetle dynamics are driven by a complex interaction of biotic and abiotic factors and not by a single parameter such as air pollution.
- Air pollution (ozone) can be one of the predisposing factors that increase the susceptibility of mountain Norway spruce stands to attack by Ips typographus and associated bark beetle species 10. Isolation, Characterization and Anticancer Potential of Cytotoxic Triterpenes from Betula utilis Bark. Directory of Open Access Journals (Sweden) Tripti Mishra Full Text Available Betula utilis, also known as Himalayan silver birch, has been used for ages as a traditional medicine for many health ailments such as inflammation, HIV, renal and bladder disorders, as well as many cancers. Here, we performed bio-guided fractionation of Betula utilis Bark (BUB), which was extracted in methanol and fractionated with hexane, ethyl acetate, chloroform, n-butanol and water. All six fractions were evaluated for their in-vitro anticancer activity in nine different cancer cell lines, and the ethyl acetate fraction was found to be one of the most potent fractions in terms of inducing cytotoxic activity against various cancer cell lines. By utilizing column chromatography, six triterpenes, namely betulin, betulinic acid, lupeol, ursolic acid (UA), oleanolic acid and β-amyrin, have been isolated from the ethyl acetate extract of BUB, and the structures of these compounds were unraveled by spectroscopic methods. β-amyrin and UA were isolated for the first time from Betula utilis. The isolated triterpenes were tested for in-vitro cytotoxic activity against six different cancer cell lines, where UA was found to be selective for breast cancer cells over non-tumorigenic breast epithelial cells (MCF-10A). The tumor-cell-selective apoptotic action of UA was mainly attributed to activation of the extrinsic apoptosis pathway via up-regulation of DR4, DR5 and PARP cleavage in MCF-7 cells over non-tumorigenic MCF-10A cells. Moreover, UA-mediated intracellular ROS generation and mitochondrial membrane potential disruption also play a key role in its anticancer effect. UA also inhibits breast cancer migration.
Altogether, we discovered novel source of UA having potent tumor cell specific cytotoxic property, indicating its therapeutic potential against breast cancer. 11. Category-length and category-strength effects using images of scenes. Science.gov (United States) Baumann, Oliver; Vromen, Joyce M G; Boddy, Adam C; Crawshaw, Eloise; Humphreys, Michael S 2018-06-21 Global matching models have provided an important theoretical framework for recognition memory. Key predictions of this class of models are that (1) increasing the number of occurrences in a study list of some items affects the performance on other items (list-strength effect) and that (2) adding new items results in a deterioration of performance on the other items (list-length effect). Experimental confirmation of these predictions has been difficult, and the results have been inconsistent. A review of the existing literature, however, suggests that robust length and strength effects do occur when sufficiently similar hard-to-label items are used. In an effort to investigate this further, we had participants study lists containing one or more members of visual scene categories (bathrooms, beaches, etc.). Experiments 1 and 2 replicated and extended previous findings showing that the study of additional category members decreased accuracy, providing confirmation of the category-length effect. Experiment 3 showed that repeating some category members decreased the accuracy of nonrepeated members, providing evidence for a category-strength effect. Experiment 4 eliminated a potential challenge to these results. Taken together, these findings provide robust support for global matching models of recognition memory. The overall list lengths, the category sizes, and the number of repetitions used demonstrated that scene categories are well-suited to testing the fundamental assumptions of global matching models. 
These include (A) interference from memories for similar items and contexts, (B) nondestructive interference, and (C) that conjunctive information is made available through a matching operation. 12. Occupational exposure of personnel assigned by NRB-76/87 to category B Energy Technology Data Exchange (ETDEWEB) Gubatova, D; Balode, G; Nemiro, E [Medical Institute, Riga (USSR)] 1990-01-01 During recent decades the use of radioactive sources in all branches of the national economy has increased. Due to this tendency, the number of individuals exposed to artificial sources, including those exposed only occasionally (the so-called category B), has increased. For this category personnel dosimetry is not obligatory, and radiation monitoring of their working and living places is considered sufficient. Nevertheless, the thermoluminescent personnel monitoring data we obtained show that the occupational exposure of this category is near, and occasionally even above, the category B exposure limit. For example, the annual exposure of anesthesiologists amounts to 4.5 mSv, that of surgeons to 6.5 mSv, and that of anesthetic nurses to 4.0 mSv (Traumatological and Orthopedic Institute personnel). The 1987 annual exposure of 'Medtekhnika' factory electromechanics, whose duties are to repair, set up and check X-ray devices, was 5 mSv. (author). 13. TO THE QUESTION OF PROFIT CATEGORY Directory of Open Access Journals (Sweden) V. V. Myamlin 2009-08-01 The economic category "profit" is considered. The discrepancies and inconsistency of the existing financial-and-economic model of management, based only upon the "profitable" paradigm, are demonstrated. It is shown how "profit" creates a discrepancy between the supply of goods and the related solvent demand. It is suggested to build the laws of economics starting not from the private interests of separate social groups but from the universal laws of Nature. The transition from "profit" maximization to wages/salary maximization is recommended.
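The category-length and category-strength predictions of global matching models, discussed in the scene-recognition record above, can be illustrated with a toy summed-similarity sketch. This is an illustrative model of my own, not any specific published model; the similarity values and item names are made up:

```python
def familiarity(probe, study_list, within=0.5, power=3):
    """Toy global matching: a probe's familiarity is the summed (cubed)
    similarity to every stored trace. Similarity is 1.0 for the identical
    item, `within` for a same-category item, and 0.0 otherwise."""
    total = 0.0
    for item, category in study_list:
        if item == probe[0]:
            sim = 1.0
        elif category == probe[1]:
            sim = within
        else:
            sim = 0.0
        total += sim ** power
    return total

short = [("beach1", "beach"), ("bath1", "bathroom")]
long_ = short + [("beach2", "beach"), ("beach3", "beach")]

foil = ("beachX", "beach")   # unstudied same-category item

# Category-length effect: extra same-category traces raise the foil's
# global match, so with a fixed decision criterion false alarms rise
# and recognition accuracy for category members drops.
print(familiarity(foil, short), familiarity(foil, long_))
# → 0.125 0.375
```

The same mechanism yields a strength effect: repeating some category members adds matching traces that inflate the familiarity of every same-category probe, studied or not.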
It is proposed to exclude a category “profit” from the financial-and-economic model of management as an unnecessary and imaginary one that continuously leads the economic system to crisis. 14. Energy Data Base: subject categories and scope International Nuclear Information System (INIS) Bost, D.E. 1985-03-01 The subject scope of the Energy Data Base (EDB) encompasses all DOE-sponsored research. Broadly defined, EDB subject scope includes all technological aspects of energy production, conversion, and efficient utilization, and the economic, social, and political aspects as well. Scope notes are provided to define the extent of interest in certain subject areas, particularly areas of basic research. Cross references between categories are provided to aid both the categorization of information and its retrieval. Citations entered into DOE's computerized bibliographic information system are assigned six-digit subject category numbers to broadly group information for storage, retrieval, and manipulation. These numbers are used in the preparation of printed documents, such as bibliographies and abstract journals, to arrange the citations and to aid searching on the DOE/RECON on-line system 15. Subject categories with scope definitions and limitations International Nuclear Information System (INIS) Bost, D.E. 1983-08-01 Citations entered into DOE's computerized bibliographic information system are assigned six-digit subject category numbers to broadly group information for storage, retrieval, and manipulation. These numbers are used in the preparation of printed documents, such as bibliographies and abstract journals, to arrange the citations and to aid searching on the DOE/RECON on-line system. 
This document has been prepared for use by (1) those individuals responsible for the assignment of category numbers to documents being entered into the Technical Information Center (TIC) system, (2) those individuals and organizations processing magnetic tape copies of the files, (3) those individuals doing on-line searching for information in TIC-created files, and (4) others who, having no access to RECON, need a printed copy 16. Assessment of antidiarrhoeal activity of the methanol extract of Xylocarpus granatum bark in mice model. Science.gov (United States) Rouf, Razina; Uddin, Shaikh Jamal; Shilpi, Jamil Ahmad; Alamgir, Mahiuddin 2007-02-12 The methanol extract of Xylocarpus granatum bark was studied for its antidiarrhoeal properties in experimental diarrhoea, induced by castor oil and magnesium sulphate in mice. At the doses of 250 and 500 mg/kg per oral, the methanol extract showed significant and dose-dependent antidiarrhoeal activity in both models. The extracts also significantly reduced the intestinal transit in charcoal meal test when compared to atropine sulphate (5 mg/kg; i.m.). The results showed that the extracts of Xylocarpus granatum bark have a significant antidiarrhoeal activity and supports its traditional uses in herbal medicine. 17. Effect of Massoia (Massoia aromatica Becc.) Bark on the Phagocytic Activity of Wistar Rat Macrophages OpenAIRE Triana Hertiani; Agustinus Yuswanto; Sylvia Utami Tunjung Pratiwi; Harlyanti Muthma’innah Mashar 2018-01-01 The essential oil of Massoia (Massoia aromatica Becc., Lauraceae) bark is a potential immunomodulator in vitro. This study evaluated the potential immunomodulatory effects of Massoia bark infusion on the nonspecific immune response (phagocytosis) of Wistar rats. For the in vitro assay, macrophages were treated with the freeze-dried infusion at the concentrations of 2.5, 5, 10, 20, or 40 µg/mL media. For the in vivo assay, two-month-old male Wistar rats were divided into five groups. The... 18. 
An efficient, robust, and inexpensive grinding device for herbal samples like Cinchona bark DEFF Research Database (Denmark) Hansen, Steen Honoré; Holmfred, Else Skovgaard; Cornett, Claus 2015-01-01 An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum...... of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery... 19. Selective Solvents for Extraction of Triterpenes from Betula Pendula Outer Bark OpenAIRE Pāže, A; Zandersons, J; Rižikovs, J; Dobele, G; Jurkjāne, V; Spince, B 2013-01-01 The volume of birch plywood production in Latvia is illustrated by the 208 000 m3 of plywood sold in 2011 and about 562 000 m3 of processed birch veneer blocks. Wood residues such as bark, veneer shorts, cut off ends and others are used as a fuel. It would be more expedient to increase the birch wood utilisation degree by involving also birch outer bark in the processing cycle. It makes up 2% of the veneer blocks’ mass. At the J.S.C. “Latvijas Finieris”, about 6000 t per year of graded and mi... 20. European spruce bark beetle (Ips typographus, L.) green attack affects foliar reflectance and biochemical properties Science.gov (United States) Abdullah, Haidi; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Groen, Thomas A.; Heurich, Marco 2018-02-01 The European spruce bark beetle Ips typographus, L. (hereafter bark beetle), causes major economic loss to the forest industry in Europe, especially in Norway Spruce (Picea abies). 
To minimise economic loss and preclude a mass outbreak, early detection of bark beetle infestation (the so-called 'green attack' stage - a period during which trees have yet to show visual signs of infestation stress) is, therefore, a crucial step in the management of Norway spruce stands. It is expected that a bark beetle infestation at the green attack stage affects a tree's physiological and chemical status. However, the concurrent effect on key foliar biochemicals such as foliar nitrogen and chlorophyll, as well as on spectral responses, is not well documented in the literature. Therefore, in this study, the early detection of bark beetle green attacks is investigated by examining foliar biochemical and spectral properties (400-2000 nm). We also assessed whether bark beetle infestation affects the estimation accuracy of foliar biochemicals. An extensive field survey was conducted in the Bavarian Forest National Park (BFNP), Germany, in the early summer of 2015 to collect leaf samples from 120 healthy and green-attacked trees. The spectra of the leaf samples were measured using an ASD FieldSpec3 equipped with an integrating sphere. Significant differences (p < 0.05) between healthy and infested needle samples were found in the mean reflectance spectra, with the most pronounced differences being observed in the NIR and SWIR regions between 730 and 1370 nm. Furthermore, significant differences (p < 0.05) were found in the biochemical compositions (chlorophyll and nitrogen concentration) of healthy versus green-attacked samples. Our results further demonstrate that the estimation accuracy of foliar chlorophyll and nitrogen concentrations, using a partial least squares regression model, was lower for the infested than for the healthy trees. We show that the early stage of infestation reduces not only 1. The Relation between Hepatotoxicity and the Total Coumarin Intake from Traditional Japanese Medicines Containing Cinnamon Bark.
Science.gov (United States) Iwata, Naohiro; Kainuma, Mosaburo; Kobayashi, Daisuke; Kubota, Toshio; Sugawara, Naoko; Uchida, Aiko; Ozono, Sahoko; Yamamuro, Yuki; Furusyo, Norihiro; Ueda, Koso; Tahara, Eiichi; Shimazoe, Takao 2016-01-01 Cinnamon bark is commonly used in traditional Japanese herbal medicines (Kampo medicines). The coumarin contained in cinnamon is known to be hepatotoxic, and a tolerable daily intake (TDI) of 0.1 mg/kg/day, has been quantified and used in Europe to insure safety. Risk assessments for hepatotoxicity by the cinnamon contained in foods have been reported. However, no such assessment of cinnamon bark has been reported and the coumarin content of Kampo medicines derived from cinnamon bark is not yet known. To assess the risk for hepatotoxicity by Kampo medicines, we evaluated the daily coumarin intake of patients who were prescribed Kampo medicines and investigated the relation between hepatotoxicity and the coumarin intake. The clinical data of 129 outpatients (18 male and 111 female, median age 58 years) who had been prescribed keishibukuryogankayokuinin (TJ-125) between April 2008 and March 2013 was retrospectively investigated. Concurrent Kampo medicines and liver function were also surveyed. In addition to TJ-125, the patients took some of the other 32 Kampo preparations and 22 decoctions that include cinnamon bark. The coumarin content of these Kampo medicines was determined by high performance liquid chromatography (HPLC). TJ-125 had the highest daily content of coumarin (5.63 mg/day), calculated from the daily cinnamon bark dosage reported in the information leaflet inserted in each package of Kampo medicine. The coumarin content in 1g cinnamon bark decoction was 3.0 mg. The daily coumarin intake of the patients was 0.113 (0.049-0.541) mg/kg/day, with 98 patients (76.0%) exceeding the TDI. 
Twenty-three patients had an abnormal change in liver function test values, but no significant difference was found in the incidence of abnormal change between the group consuming less than the TDI value (6/31, 19.4%) and the group consuming equal to or greater than the TDI value (17/98, 17.3%). In addition, no abnormal change related to cinnamon bark was found for individual 2. HOPEAPHENOL-O-GLYCOSIDE, A COMPOUND ISOLATED FROM STEM BARK Anisoptera marginata (Dipterocarpaceae) Directory of Open Access Journals (Sweden) Sri Atun 2010-06-01 Full Text Available Isolation and structure elucidation of some compounds from the stem bark of Anisoptera marginata have been carried out. The compounds were isolated by chromatographic methods, and their structures were elucidated by interpretation of spectroscopic data, including UV, IR, 1D and 2D 1H and 13C NMR, and FABMS. From the acetone extract of A. marginata stem bark we isolated five known compounds, namely bergenin (1), (-)-ε-viniferin (2), (-)-ampelopsin A (3), vaticanol B (4) and (-)-hopeaphenol (5), as well as a glycoside, hopeaphenol-O-glycoside (6). Keywords: Dipterocarpaceae; Anisoptera marginata; hopeaphenol-O-glucoside 3. Pine bark as bio-adsorbent for Cd, Cu, Ni, Pb and Zn DEFF Research Database (Denmark) Cutillas-Barreiro, L.; Ansias-Manso, L.; Fernandez Calviño, David 2014-01-01 to the added concentrations, with Pb always showing the lowest levels. Stirred flow chamber experiments showed strong hysteresis for Pb and Cu, sorption being mostly irreversible. The differences affecting the studied heavy metals are mainly due to different affinity for the adsorption sites. Pine bark can......The objective of this work was to determine the retention of five metals on pine bark using stirred flow and batch-type experiments. Resulting from batch-type kinetic experiments, adsorption was rapid, with no significant differences for the various contact times. Adsorption was between 98 and 99... 4.
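The risk comparison in the coumarin record above is simple arithmetic: the daily coumarin dose from a preparation divided by body weight, checked against the European tolerable daily intake (TDI) of 0.1 mg/kg/day. A sketch using the figures reported above; the 50 kg body weight is an assumed example, not a value from the paper:

```python
TDI_MG_PER_KG = 0.1   # tolerable daily intake used in Europe, mg/kg/day

def coumarin_intake(dose_mg_per_day, body_weight_kg):
    """Daily coumarin intake in mg/kg/day."""
    return dose_mg_per_day / body_weight_kg

# TJ-125 delivers 5.63 mg coumarin/day, the highest daily content
# reported in the study.
intake = coumarin_intake(5.63, 50.0)   # assumed 50 kg patient
print(round(intake, 3), intake >= TDI_MG_PER_KG)
# → 0.113 True
```

For a 50 kg patient this gives 0.113 mg/kg/day, which matches the median intake the study reports and illustrates why roughly three quarters of the mostly female cohort exceeded the TDI.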
Health effects of risk-assessment categories International Nuclear Information System (INIS) Kramer, C.F.; Rybicka, K.; Knutson, A.; Morris, S.C. 1983-10-01 Environmental and occupational health effects associated with exposures to various chemicals are a subject of increasing concern. One recently developed methodology for assessing the health impacts of various chemical compounds involves the classification of similar chemicals into risk-assessment categories (RACs). This report reviews documented human health effects for a broad range of pollutants, classified by RACs. It complements other studies that have estimated human health effects by RAC based on analysis and extrapolation of data from animal research 5. Sacrality and worldmaking: new categorial perspectives OpenAIRE William E. Paden 1999-01-01 The category of the sacred in particular and the role of transcultural concept-formation in general have undergone an obvious crisis. For the most part, "the sacred," if not an empty label, has been linked with theologism, and transcultural concepts have been condemned for their general non-comparability and colonialist intent. The author approaches the matter of transcultural templates through an analysis of certain concepts of sacrality. With some exceptions, the discourse of sacrality has ... 6. 40 CFR 156.62 - Toxicity Category. Science.gov (United States) 2010-07-01 .... Acute Toxicity Categories for Pesticide Products, by hazard indicator:
Oral LD50 - I: up to and including 50 mg/kg; II: >50 thru 500 mg/kg; III: >500 thru 5,000 mg/kg; IV: >5,000 mg/kg.
Dermal LD50 - I: up to and including 200 mg/kg; II: >200 thru 2,000 mg/kg; III: >2,000 thru 20,000 mg/kg; IV: >20,000 mg/kg.
Inhalation LC50 - I: up to and including... 7. Health effects of risk-assessment categories Energy Technology Data Exchange (ETDEWEB) Kramer, C.F.; Rybicka, K.; Knutson, A.; Morris, S.C. 1983-10-01 Environmental and occupational health effects associated with exposures to various chemicals are a subject of increasing concern.
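The thresholds in the 40 CFR 156.62 record above translate directly into a lookup. A sketch for the oral and dermal rows (the inhalation row is truncated in the record and is omitted here):

```python
# Upper bounds (mg/kg) for categories I-III; anything above the last
# bound falls in category IV. Values from the 40 CFR 156.62 record.
ORAL_LD50_BOUNDS = (50, 500, 5_000)
DERMAL_LD50_BOUNDS = (200, 2_000, 20_000)

def toxicity_category(ld50_mg_per_kg, bounds):
    """Return the acute toxicity category (I-IV) for an LD50 value."""
    for label, bound in zip(("I", "II", "III"), bounds):
        if ld50_mg_per_kg <= bound:   # "up to and including" the bound
            return label
    return "IV"

print(toxicity_category(50, ORAL_LD50_BOUNDS))       # boundary value -> I
print(toxicity_category(300, ORAL_LD50_BOUNDS))      # II
print(toxicity_category(25_000, DERMAL_LD50_BOUNDS)) # IV
```

Note that each boundary value belongs to the more toxic category ("up to and including"), which the `<=` comparison preserves.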
One recently developed methodology for assessing the health impacts of various chemical compounds involves the classification of similar chemicals into risk-assessment categories (RACs). This report reviews documented human health effects for a broad range of pollutants, classified by RACs. It complements other studies that have estimated human health effects by RAC based on analysis and extrapolation of data from animal research. 8. Metacognitive control of categorial neurobehavioral decision systems Directory of Open Access Journals (Sweden) Gordon Robert Foxall 2016-02-01 Full Text Available The competing neuro-behavioral decision systems (CNDS model proposes that the degree to which an individual discounts the future is a function of the relative hyperactivity of an impulsive system based on the limbic and paralimbic brain regions and the relative hypoactivity of an executive system based in prefrontal cortex (PFC. The model depicts the relationship between these categorial systems in terms of the antipodal neurophysiological, behavioral, and decision (cognitive functions that engender classes normal and addictive responding. However, a case may be made for construing several components of the impulsive and executive systems depicted in the model as categories (elements of additional systems that are concerned with the metacognitive control of behavior. Hence, this paper proposes a category-based structure for understanding the effects on behavior of CNDS, which includes not only the impulsive and executive systems of the basic model but, a superordinate level of reflective or rational decision-making. 
Following recent developments in the modeling of cognitive control, which contrast Type 1 (rapid, autonomous, parallel) processing with Type 2 (slower, computationally demanding, sequential) processing, the proposed model incorporates an arena in which the potentially conflicting imperatives of impulsive and executive systems are examined and from which a more appropriate behavioral response than impulsive choice emerges. This configuration suggests a forum in which the interaction of picoeconomic interests, which provide a cognitive dimension for CNDS, can be conceptualized. This proposition is examined in light of the resolution of conflict by means of bundling. 9. Predisposition to bark beetle attack by root herbivores and associated pathogens: Roles in forest decline, gap formation, and persistence of endemic bark beetle populations DEFF Research Database (Denmark) Aukema, Brian H.; Zhu, Jun; Møller, Jesper 2010-01-01 , however, due to the requirement of long-term monitoring and high degrees of spatial and temporal covariance. We censused more than 2700 trees annually over 7 years, and at the end of 17 years, in a mature red pine plantation. Trees were measured for the presence of bark beetles and wood borers that breed...... within the primary stem, root weevils that breed in root collars, and bark beetles that breed in basal stems. We quantify the sequence of events that drive this decline syndrome, with the primary emergent pattern being an interaction between below- and above-ground herbivores and their fungal symbionts......, and elevated temperature slightly accentuates this effect. New gaps can arise from such trees as they subsequently become epicenters for the full complex of organisms associated with this decline, but this is not common. As Ips populations rise, there is some element of positive feedback... 10. Semantic category interference in overt picture naming: sharpening current density localization by PCA.
Science.gov (United States) Maess, Burkhard; Friederici, Angela D; Damian, Markus; Meyer, Antje S; Levelt, Willem J M 2002-04-01 The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production. 11. 
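The decomposition described in the MEG record above, principal component analysis of a grand-average current source density (CSD) distribution into spatial factors, can be sketched generically with an SVD. This is an illustrative sketch, not the authors' pipeline; the 4-channel, 5-sample matrix is made up:

```python
import numpy as np

def spatial_pca(data):
    """Decompose a (channels x timepoints) matrix into spatial factors.

    Columns of `spatial` are orthonormal spatial patterns; rows of
    `scores` are the corresponding time courses, ordered by explained
    variance.
    """
    centered = data - data.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    spatial = u                      # spatial factors (channel weights)
    scores = s[:, None] * vt         # factor time courses
    explained = s**2 / np.sum(s**2)  # variance share per factor
    return spatial, scores, explained

# A toy "CSD" built from a single dominant spatial pattern, so the
# first factor should carry essentially all the variance.
csd_toy = np.array([
    [1.0, 2.0, 3.0, 2.0, 1.0],
    [2.0, 4.0, 6.0, 4.0, 2.0],
    [-1.0, -2.0, -3.0, -2.0, -1.0],
    [0.5, 1.0, 1.5, 1.0, 0.5],
])
spatial, scores, explained = spatial_pca(csd_toy)
print(np.round(explained, 3))  # first factor dominates here
```

Condition effects can then be tested on the time courses of one spatial factor at a time, which is what makes the left-temporal factor in the study above interpretable.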
Radiation protection in category III large gamma irradiators; Radioprotecao em irradiadores de grande porte de categoria III Energy Technology Data Exchange (ETDEWEB) Costa, Neivaldo; Furlan, Gilberto Ribeiro, E-mail: [email protected], E-mail: [email protected] [Centro de Energia Nuclear na Agricultura (CENA/USP), Piracicaba, SP (Brazil)]; Itepan, Natanael Marcio, E-mail: [email protected] [Universidade Anhanguera, Goiania, GO (Brazil)] 2011-07-01 This article discusses the advantages of category III large gamma irradiators compared to the other categories, with emphasis on aspects of radiological protection in the industrial sector. This category of irradiator is almost unknown to regulatory authorities and the industrial community, despite its simple construction and the greater radiation safety intrinsic to the design, which can maintain productivity comparable to that of category IV irradiators. Worldwide, more than 200 category IV irradiators are installed, while not a single category III irradiator is in operation. In a category III gamma irradiator, the source remains fixed at the bottom of the tank, always shielded by water, eliminating the exposure risk. Taking into account the benefits in terms of radiation safety, category III large irradiators are highly recommended for industrial and commercial purposes as well as scientific research. (author) 12. Setters and samoyeds: the emergence of subordinate level categories as a basis for inductive inference in preschool-age children. Science.gov (United States) Waxman, S R; Lynch, E B; Casey, K L; Baer, L 1997-11-01 Basic level categories are a rich source of inductive inference for children and adults. These 3 experiments examine how preschool-age children partition their inductively rich basic level categories to form subordinate level categories and whether these have inductive potential. Children were taught a novel property about an individual member of a familiar basic level category (e.g., a collie).
Then, children's extensions of that property to other objects from the same subordinate (e.g., other collies), basic (e.g., other dogs), and superordinate (e.g., other animals) level categories were examined. The results suggest (a) that contrastive information promotes the emergence of subordinate categories as a basis of inductive inference and (b) that newly established subordinate categories can retain their inductive potential in subsequent reasoning over a week's time. 13. Low temperature corrosion in bark fuelled, small boilers Energy Technology Data Exchange (ETDEWEB) Lindau, Leif; Goldschmidt, Barbara 2008-05-15 A number of small (3-12 MW), new biofuel boiler plants in southern Sweden, and (at least) in Austria, have suffered a high (wastage of mm/yrs) corrosion rate on the low temperature boiler side. This problem has been investigated with respect to its occurrence and its character by contacts with operators, by plant inspections, and by analysis of cold-side deposits. The plants affected have low feed water temperatures (< 100 deg C). The plants fire most types of Swedish biofuel: chips, bark, hog fuel, and 'GROT' (=twigs and tops). The results found give basis for a hypothesis that the corrosion results from the presence of an aqueous phase in the deposits, this phase being stabilized by dissolved salts having high solubility. It then follows that for each salt, there is a critical relative humidity (calculated from the flue gas water partial pressure and the cooling surface temperature as is common practice among boiler engineers) for both the presence of the aqueous phase and the corrosion. Some critical single salts, ZnCl{sub 2} and CaCl{sub 2} have been identified, and they give critical 'relative humidities' of 5% and 18% respectively. These figures are a lower bound. The corresponding figure, derived from the practical experience and the reported plant operational data, is between 20 and 30%. 
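The design criterion in the corrosion record above - relative humidity at the cooled surface, computed from the flue-gas water partial pressure and the surface temperature, as is common boiler-engineering practice - can be sketched with a Magnus-type saturation-pressure approximation. The 15 kPa water partial pressure below is an assumed, typical value for moist biofuel flue gas, not a figure from the report:

```python
import math

def saturation_pressure_kpa(temp_c):
    """Approximate water saturation pressure (kPa) via a Magnus-type formula."""
    return 0.6112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(p_h2o_kpa, surface_temp_c):
    """'Relative humidity' at a cooled surface: flue-gas water partial
    pressure over the saturation pressure at the surface temperature."""
    return p_h2o_kpa / saturation_pressure_kpa(surface_temp_c)

P_H2O = 15.0  # kPa, assumed flue-gas water vapour partial pressure

# Below the ~22% threshold found in the probe tests, corrosion was not
# measurable; a cool surface (low feed-water temperature) sits far above it.
for t in (60.0, 80.0, 100.0):
    print(t, round(100 * relative_humidity(P_H2O, t), 1))
    # roughly 75%, 31%, and 14% respectively
```

This reproduces the report's practical point: with feed-water temperatures below 100 deg C the cold surfaces sit well above the critical relative humidity, stabilizing the corrosive aqueous phase in the deposits.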
Corrosion tests have been carried out by exposing an air-cooled probe in the flue gases at a 12 MW boiler at Saevelundsverket in Alingsaas, and the material wastage at different temperatures has been measured with a profilometer. The high corrosion rates were reproduced in the tests for high relative humidities. The corrosion rate was small and not measurable (<0.1 mm/year) for relative humidity <22%. The work shows by means of indirect evidence that the corrosion critical components are ZnCl{sub 2} and possibly CaCl{sub 2} as well. The practical engineering design criterion derived from the work is that the relative humidity (calculated from the flue 14. Effective pine bark composting with the Dome Aeration Technology International Nuclear Information System (INIS) Trois, Cristina; Polster, Andreas 2007-01-01 In South Africa garden refuse is primarily disposed of in domestic landfills. Due to the large quantities generated, any form of treatment would be beneficial for volume reduction, waste stabilization and resource recovery. Dome Aeration Technology (DAT) is an advanced process for aerobic biological degradation of garden refuse and general waste [Paar, S., Brummack, J., Gemende, B., 1999a. Advantages of dome aeration in mechanical-biological waste treatment. In: Proceedings of the 7th International Waste Management and Landfill Symposium, Cagliari, 4-8 October 1999; Paar, S., Brummack, J., Gemende, B., 1999b. Mechanical-biological waste stabilization by the dome aeration method. Environment Protection Engineering 25 (3/99). Mollekopf, N., Brummack, J., Paar, S., Vorster, K., 2002. Use of the Dome Aeration Technology for biochemical stabilization of waste prior to landfilling. In: Proceedings of the Wastecon 2002, Waste Congress and Exhibition, Durban, South Africa.]. It is a non-reactor open windrow composting process, with the main advantage being that the input material needs no periodic turning. 
A rotting time of only 3-4 months indicates the high efficiency. Additionally, the low capital/operational costs, low energy inputs and limited plant requirements provide potential for use in aerobic refuse stabilization. The innovation in the DAT process is the passive aeration achieved by thermally driven advection through open windrows caused by temperature differences between the degrading material and the outside environment. This paper investigates the application of Dome Aeration Technology to pine bark composting as part of an integrated waste management strategy. A full-scale field experiment was performed at the Bisasar Road Landfill Site in Durban to assess the influence of climate, waste composition and operational conditions on the process. A test windrow was constructed and measurements of temperature and airflow through the material were taken. The process 15. From groups to categorial algebra: introduction to protomodular and Mal'tsev categories CERN Document Server Bourn, Dominique 2017-01-01 This book gives a thorough and entirely self-contained, in-depth introduction to a specific approach to group theory, in a large sense of that word. The focus lies on the relationships which a group may have with other groups, via "universal properties", a view on that group "from the outside". This method of categorical algebra is actually not limited to the study of groups alone, but applies equally well to other similar categories of algebraic objects. The book introduces protomodular categories and Mal'tsev categories, which form a larger class, and shows how the structural properties of the category Gp of groups emerge from four very basic observations about the algebraic literal calculus and how, studied for themselves at the conceptual categorical level, these properties lead to the main striking features of the category Gp of groups. 
Hardly any previous knowledge of category theory is assumed, and just a little experience with standard algebraic structures such as groups and monoids. Examples and exercises... 16. Anthropogenic radionuclides and heavy metals in black poplar tree (Populus nigra L.) bark sampled in one of the residential districts of Kyiv International Nuclear Information System (INIS) Berlizov, A.N.; Malyuk, I.A.; Sazhenyuk, A.D.; Tryshyn, V.V. 2006-01-01 Tree bark is known to be a good alternative biological substrate that can be successfully used in air pollution monitoring studies, especially in urban and industrialized areas suffering from severe anthropogenic pressure. In Kyiv black poplar is a widespread tree species, whose bark was used as a biological indicator in our research. The bark samples were collected within one of the residential districts of Kyiv and were subject to comprehensive analysis for the content of stable elements and anthropogenic radionuclides. Thermal and epicadmium NAA in short- and long-term irradiation modes, respectively, were used for the determination of concentrations of up to 40 heavy metals, while gamma spectrometry, alpha spectrometry and radiochemical extraction-ion-exchange techniques were applied to determine 137Cs, 90Sr, Pu and Am radioactive isotopes in single bark samples. The analytical data obtained were subject to correlation and factor analysis, which revealed basic air pollution sources in the investigated region. It was shown that no significant correlations exist between radionuclides and any determined stable elements in the analyzed samples. All measured radioactive isotopes turned out to fall into a separate factor, which is believed to represent the direct deposition of fuel microparticles from the Chernobyl NPP's Unit 4 from the atmosphere into the substratum during radioactive fallouts in spring 1986. 
This conclusion was supported by the evaluated isotopic ratios 137Cs/90Sr = 1.1 ± 0.4, 137Cs/239+240Pu = 100 ± 40, 239+240Pu/238Pu = 1.0 ± 0.6, as well as by the observed significant variation of the radionuclide concentrations (e.g. 10-1540 Bq/kg for 137Cs, 0.1-21 Bq/kg for 238,240Pu), which is believed to reflect a microparticle character of the pollution. The obtained data suggest that re-suspension does not play a significant role in the formation of atmospheric air pollution by radioactive substances in the 17. Order of Presentation Effects in Learning Color Categories Science.gov (United States) Sandhofer, Catherine M.; Doumas, Leonidas A. A. 2008-01-01 Two studies, an experimental category learning task and a computational simulation, examined how sequencing training instances to maximize comparison and memory affects category learning. In Study 1, 2-year-old children learned color categories with three training conditions that varied in how categories were distributed throughout training and… 18. Supervised and Unsupervised Learning of Multidimensional Acoustic Categories Science.gov (United States) Goudbeek, Martijn; Swingley, Daniel; Smits, Roel 2009-01-01 Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is… 19. Antinociceptive effect of the ethanol extract of the stem bark of ... African Journals Online (AJOL) Musanga cecropioides R. Apud Tedlie (Cecropiaceae), also known as umbrella tree, is one of the medicinal plants used in Nigeria for pain and inflammation. The stem bark was extracted with absolute ethanol and screened for analgesic activities. The screening for analgesic properties was done using: acetic acid induced ... 20. 
Host-tree monoterpenes and biosynthesis of aggregation pheromones in the bark beetle Ips paraconfusus Science.gov (United States) In the 1970-80s, vapors of the common conifer tree monoterpenes, myrcene and α-pinene, were shown to serve as precursors of ipsenol, ipsdienol and cis-verbenol, aggregation pheromone components of Ips paraconfusus. A paradigm developed that Ips bark beetles utilize pre-formed monoterpene precursors ... 1. Presence of carbaryl in the smoke of treated lodgepole and ponderosa pine bark Science.gov (United States) Chris J. Peterson; Sheryl L. Costello 2013-01-01 Lodgepole and ponderosa pine trees were treated with a 2% carbaryl solution at recreational areas near Fort Collins, CO, in June 2010 as a prophylactic bole spray against the mountain pine beetle. Bark samples from treated and untreated trees were collected one day following application and at 4-month intervals for one year. The residual amount of carbaryl was... 2. Analgesic activity of crude aqueous extract of the root bark of ... African Journals Online (AJOL) Objective: The analgesic activity of crude aqueous extract of the root bark of Zanthoxylum xanthozyloides was studied in mice and rats with the view to verifying the claim in folklore medicine that the extract has analgesic activity. Method: The extract was obtained by Soxhlet extraction and rotatory evaporation, followed by ... 3. Dataset on analysis of dyeing property of natural dye from Thespesia populnea bark on different fabrics Directory of Open Access Journals (Sweden) Kuchekar Mohini 2018-02-01 Full Text Available Natural dyes separated from plants are gaining interest as substitutes for synthetic dyes in food and cosmetics. Thespesia populnea (T. populnea) is a widely grown plant used in the treatment of various diseases. This study aimed to separate a natural dye from T. populnea bark and to analyze its dyeing properties on different fabrics. In this investigation a pharmacognostic study was carried out. 
The pharmacognostic study included morphological study, microscopical examination and proximate analysis, along with a phytochemical study. Different fabrics were dyed with the natural dye extracted from T. populnea bark: cotton, butter crep, polymer, chiken, lone, ulene and tarakasa. Various evaluation parameters were studied: the effects of washing with water, of soap, of sunlight, of alum and of cupric sulphate, along with microscopical study of the fabrics and visual assessment of the dyeing by common people. In the results, the natural dye isolated from T. populnea bark could be used for dyeing fabrics with good fastness properties. The study revealed that the dyeing of the fabrics is not affected by washing with water or soap, or by exposure to sunlight. It was observed that cotton and tarakasa stain better than the other fabrics. It was concluded that the ethanolic extract has good dyeing properties. Keywords: Plant, Thespesia populnea, Bark, Natural dye, Fabrics 4. Acceptance and suitability of novel trees for Orthotomicus erosus, an exotic bark beetle in North America Science.gov (United States) A.J. Walter; R.C. Venette; S.A. Kells 2010-01-01 To predict whether an herbivorous pest insect will establish in a new area, the potential host plants must be known. For invading bark beetles, adults must recognize and accept trees suitable for larval development. The preference-performance hypothesis predicts that adults will select host species that maximize the fitness of their offspring. We tested five species of... 5. Attraction of ambrosia and bark beetles to coast live oaks infected by Phytophthora ramorum Science.gov (United States) Brice A. McPherson; Nadir Erbilgin; David L. Wood; Pavel Svihra; Andrew J. Storer; Richard B. 
Standiford 2008-01-01 Sudden oak death, caused by Phytophthora ramorum (Werres, de Cock & Man in 't Veld), has killed thousands of oaks (Quercus spp.) in coastal California forests since the mid-1990s. Bark and ambrosia beetles that normally colonize dead or severely weakened trees selectively tunnel into the bleeding cankers that are the first... 6. Electrophysiological and olfactometer responses of two histerid predators to three pine bark beetle pheromones Science.gov (United States) William P. Shepherd; Brian T. Sullivan; Richard A. Goyer; Kier D. Klepzig 2005-01-01 We measured electrophysiological responses in the antennae of two predaceous hister beetles, Platysoma parallelum and Plegaderus transversus, exposed to racemic mixtures of primary aggregation pheromones of scolytid bark beetle prey, ipsenol, ipsdienol, and frontalin. No significant differences were found for either histerid... 7. Temperature determines symbiont abundance in a multipartite bark beetle-fungus ectosymbiosis Science.gov (United States) D. L. Six; B. J. Bentz 2007-01-01 In this study, we report evidence that temperature plays a key role in determining the relative abundance of two mutualistic fungi associated with an economically and ecologically important bark beetle, Dendroctonus ponderosae. The symbiotic fungi possess different optimal temperature ranges. These differences determine which fungus is vectored by... 8. Effects of a Commercial Chitosan Formulation on Bark Beetle (Coleoptera: Curculionidae) Resistance Parameters in Loblolly Pine Science.gov (United States) K. D. Klepzig; B. L. Strom 2011-01-01 A commercially available chitosan product, Beyond™, was evaluated for its effects on loblolly pine, Pinus taeda L., responses believed to be related to bark beetle resistance. Treatments were applied 4 times at approx. 6-wk intervals between May and November 2008. Five treatments were evaluated: ground application (soil drench), foliar application, ground... 9. 
Association of Geosmithia fungi (Ascomycota: Hypocreales) with pine- and spruce-infesting bark beetles in Poland Czech Academy of Sciences Publication Activity Database Jankowiak, R.; Kolařík, Miroslav; Bilanski, P. 2014-01-01 Roč. 11, OCT 2014 (2014), s. 71-79 ISSN 1754-5048 R&D Projects: GA ČR(CZ) GAP506/11/2302 Institutional support: RVO:61388971 Keywords: Insect-fungus interactions * Bark beetles * Ectosymbiosis Subject RIV: EE - Microbiology, Virology Impact factor: 2.929, year: 2014 10. Atmospheric pollution in an urban environment by tree bark biomonitoring--part I: trace element analysis. Science.gov (United States) Guéguen, Florence; Stille, Peter; Lahd Geagea, Majdi; Boutin, René 2012-03-01 Tree bark has been shown to be a useful biomonitor of past air quality because it accumulates atmospheric particulate matter (PM) in its outermost structure. Trace element concentrations in the bark of more than 73 trees allow elucidation of the impact of past atmospheric pollution on the urban environment of the cities of Strasbourg and Kehl in the Rhine Valley. Compared to the upper continental crust (UCC), tree barks are strongly enriched in Mn, Ni, Cu, Zn, Cd and Pb. To assess the degree of pollution of the different sites in the cities, a geoaccumulation index I(geo) was applied. Global pollution by V, Ni, Cr, Sb, Sn and Pb was observed in barks sampled close to traffic axes. Cr, Mo, Cd pollution principally occurred in the industrial area. A total geoaccumulation index I(GEO-tot) was defined; it is based on the total of the investigated elements and allows evaluation of the global pollution of the studied environment by assembling the I(geo) indices on a pollution map. Copyright © 2011 Elsevier Ltd. All rights reserved. 11. Phytochemical study from root barks of Zanthoxylum rigidum Humb. and Bonpl. 
ex Willd (Rutaceae) International Nuclear Information System (INIS) Moccelini, Sally Katiuce; Silva, Virginia Claudia da; Ndiaye, Eliane Augusto; Sousa Junior, Paulo Teixeira de; Vieira, Paulo Cezar 2009-01-01 Chemical investigation of the root barks of Z. rigidum resulted in the isolation of lupeol, a mixture of the steroids campesterol, sitosterol and stigmasterol, sucrose, hesperidin, N-methylatanine and 6-acetonyldihydrochelerythrine. Their structures were established by spectral data analysis. No previous work has been reported on the Z. rigidum species. (author) 12. 78 FR 12788 - Certain Electronic Bark Control Collars; Notice of Institution of Investigation; Institution of... Science.gov (United States) 2013-02-25 ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-870] Certain Electronic Bark Control... AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that a complaint and a motion for temporary relief were filed with the U.S. International Trade Commission on... 13. 
First observation of the decay $B_s^0 \rightarrow \phi \bar{K}^{*0}$ CERN Document Server Aaij, R; Adeva, B; Adinolfi, M; Adrover, C; Affolder, A; Ajaltouni, Z; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves Jr, A A; Amato, S; Amerio, S; Amhis, Y; Anderlini, L; Anderson, J; Andreassen, R; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Bachmann, S; Back, J J; Baesso, C; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Bauer, Th; Bay, A; Beddow, J; Bedeschi, F; Bediaga, I; Belogurov, S; Belous, K; Belyaev, I; Ben-Haim, E; Benayoun, M; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bettler, M -O; van Beuzekom, M; Bien, A; Bifani, S; Bird, T; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Bowcock, T J V; Bowen, E; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Britsch, M; Britton, T; Brook, N H; Brown, H; Burducea, I; Bursche, A; Busetto, G; Buytaert, J; Cadeddu, S; Callot, O; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carranza-Mejia, H; Carson, L; Carvalho Akiba, K; Casse, G; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Charles, M; Charpentier, Ph; Chen, P; Chiapolini, N; Chrzaszcz, M; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coca, C; Coco, V; Cogan, J; Cogneras, E; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Couturier, B; Cowan, G A; Craik, D C; Cunliffe, S; Currie, R; D'Ambrosio, C; David, P; David, P N Y; Davis, A; De Bonis, I; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Silva, W; De Simone, P; Decamp, D; Deckenhoff, M; Del Buono, L; Derkach, D; Deschamps, O; Dettori, F; Di Canto, A; Dijkstra, H; Dogaru, M; Donleavy, S; Dordei, F; Dosil 
Suárez, A; Dossett, D; Dovbnya, A; Dupertuis, F; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; van Eijk, D; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Elsby, D; Falabella, A; Färber, C; Fardell, G; Farinelli, C; Farry, S; Fave, V; Ferguson, D; Fernandez Albor, V; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fitzpatrick, C; Fontana, M; Fontanelli, F; Forty, R; Francisco, O; Frank, M; Frei, C; Frosini, M; Furcas, S; Furfaro, E; Gallas Torreira, A; Galli, D; Gandelman, M; Gandini, P; Gao, Y; Garofoli, J; Garosi, P; Garra Tico, J; Garrido, L; Gaspar, C; Gauld, R; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gibson, V; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hampson, T; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hartmann, T; He, J; Heijne, V; Hennessy, K; Henrard, P; Hernando Morata, J A; van Herwijnen, E; Hicheur, A; Hicks, E; Hill, D; Hoballah, M; Holtrop, M; Hombach, C; Hopchev, P; Hulsbergen, W; Hunt, P; Huse, T; Hussain, N; Hutchcroft, D; Hynds, D; Iakovenko, V; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jans, E; Jaton, P; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Kaballo, M; Kandybei, S; Karacson, M; Karbach, T M; Kenyon, I R; Kerzel, U; Ketel, T; Keune, A; Khanji, B; Kochebina, O; Komarov, I; Koopman, R F; Koppenburg, P; Korolev, M; Kozlinskiy, A; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kucharczyk, M; Kudryavtsev, V; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J -P; Lefèvre, R; Leflat, A; 
Lefrançois, J; Leo, S; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Li Gioi, L; Liles, M; Lindner, R; Linn, C; Liu, B; Liu, G; Lohn, S; Longstaff, I; Lopes, J H; Lopez Asamar, E; Lopez-March, N; Lu, H; Lucchesi, D; Luisier, J; Luo, H; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Malde, S; Manca, G; Mancinelli, G; Marconi, U; Märki, R; Marks, J; Martellotti, G; Martens, A; Martín Sánchez, A; Martinelli, M; Martinez Santos, D; Martins Tostes, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Maurice, E; Mazurov, A; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Merk, M; Milanes, D A; Minard, M -N; Molina Rodriguez, J; Monteil, S; Moran, D; Morawski, P; Morello, M J; Mountain, R; Mous, I; Muheim, F; Müller, K; Muresan, R; Muryn, B; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nasteva, I; Needham, M; Neufeld, N; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Nicol, M; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Nomerotski, A; Novoselov, A; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Orlandea, M; Otalora Goicochea, J M; Owen, P; Oyanguren, A; Pal, B K; Palano, A; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Patrick, G N; Patrignani, C; Pavel-Nicorescu, C; Pazos Alvarez, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perego, D L; Perez Trigo, E; Pérez-Calero Yzquierdo, A; Perret, P; Perrin-Terrin, M; Pessina, G; Petridis, K; Petrolini, A; Phan, A; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Playfer, S; Plo Casasus, M; Polci, F; Polok, G; Poluektov, A; Polycarpo, E; Popov, D; Popovici, B; Potterat, C; Powell, A; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Rademacker, J H; Rakotomiaramanana, B; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redford, S; Reid, M M; dos Reis, A C; Ricciardi, S; Richards, A; Rinnert, K; Rives Molina, V; Roa Romero, D A; Robbe, P; 
Rodrigues, E; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; Rouvinet, J; Ruf, T; Ruffini, F; Ruiz, H; Ruiz Valls, P; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salzmann, C; Sanmartin Sedes, B; Sannino, M; Santacesaria, R; Santamarina Rios, C; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Savrie, M; Savrina, D; Schaack, P; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M -H; Schwemmer, R; Sciascia, B; Sciubba, A; Seco, M; Semennikov, A; Sepp, I; Serra, N; Serrano, J; Seyfert, P; Shapkin, M; Shapoval, I; Shatalov, P; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, O; Shevchenko, V; Shires, A; Silva Coutinho, R; Skwarnicki, T; Smith, N A; Smith, E; Smith, M; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Sparkes, A; Spradlin, P; Stagni, F; Stahl, S; Steinkamp, O; Stoica, S; Stone, S; Storaci, B; Straticiuc, M; Straumann, U; Subbiah, V K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Teklishyn, M; Teodorescu, E; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Tolk, S; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Tran, M T; Tresch, M; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ubeda Garcia, M; Ukleja, A; Urner, D; Uwer, U; Vagnoni, V; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; Voss, H; Waldi, R; Wallace, R; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Webber, A D; Websdale, D; Whitehead, M; Wicht, J; Wiechczynski, J; Wiedner, D; Wiggers, L; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wishahi, J; Witek, M; Wotton, S A; Wright, S; Wu, S; Wyllie, K; Xie, Y; Xing, Z; Yang, Z; Young, R; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, F; 
Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zvyagin, A 2013-11-12 A first observation of the decay $B_s^0 \rightarrow \phi \bar{K}^{*0}$ is reported from an analysis based on a data sample, corresponding to an integrated luminosity of 1.0 fb$^{-1}$ of $pp$ collisions at $\sqrt{s} = 7\,\mathrm{TeV}$, collected with the LHCb detector. A yield of $30 \pm 6$ $B_s^0 \to (KK)(K\pi)$ candidates is found in the mass windows $1012.5 < M(KK) < 1026.5\,\mathrm{MeV}/c^2$ and $746 < M(K\pi) < 1046\,\mathrm{MeV}/c^2$, corresponding to a signal significance of 6.1 standard deviations. The candidates are found to be dominated by $B_s^0 \rightarrow \phi \bar{K}^{*0}$ decays, and the branching fraction is measured to be $BF(B_s^0 \rightarrow \phi \bar{K}^{*0}) = (1.10 \pm 0.24\,\mathrm{(stat)} \pm 0.14\,\mathrm{(syst)} \pm 0.08\,(f_d/f_s)) \times 10^{-6}$, where the uncertainties are statistical, systematic and from the ratio of fragmentation fractions $f_d/f_s$ which accounts for the different production rate of $B^0$ and $B_s^0$ mesons. The fraction of longitudinal polarization in $B_s^0 \rightarrow \phi \bar{K}^{*0}$ decay... 14. Polychlorinated biphenyls in tree bark near a former manufacturing plant in Anniston, Alabama. Science.gov (United States) Hermanson, Mark H; Johnson, Glenn W 2007-05-01 Tree bark samples were collected to identify the relative amounts and congener profiles of atmospheric polychlorinated biphenyls dissolved into bark lipids from the gas phase in Anniston, Alabama, USA, where PCBs were manufactured from the 1920s until 1971. The area is heavily contaminated with PCBs: At least 4550 metric tons (mt) of PCB and 14000 mt of PCB distillation residue, known as Montar, remain buried in two landfills near the plant site. A minimum of 20.5 mt of PCBs were emitted to the atmosphere by the plant between 1953 and 1971 based on emissions figures for 1970. 
Bark results show that total PCB concentrations range over more than three orders of magnitude, from 171927 ng/g lipid near the plant/landfill area, dropping exponentially to 35 ng/g lipid at a distance of about 7 km. The exponential trend is highly correlated (r = -0.77) and statistically significant. Trees that started growing after 1971 show that atmospheric PCB concentrations remained high after PCB production ended. All PCB congener profiles show persistent congeners 31+28, 52, 66, 153, 138, and 180. Trees growing near the plant/landfill all have somewhat similar congener profiles, but those growing during PCB production show high molecular mass compounds not usually found in the atmosphere and not found in younger trees, even in the most concentrated sample. We believe that high-temperature Montar disposal released high molecular mass PCBs into the gas phase, which were dissolved into older tree bark lipids. 15. Anti-ulcerogenic activity of the methanol root bark extract of ... African Journals Online (AJOL) Cochlospermum planchonii (Hook f) is a common medicinal plant used in Nigerian traditional medicine for treatment of different ailments including ulcers. The anti-ulcer activity of the root bark methanol extract of Cochlospermum planchonii was evaluated using different [ethanol, acetylsalicylic acid (aspirin), cold/restraint ... 16. Study of the betulin enriched birch bark extracts effects on human carcinoma cells and ear inflammation Directory of Open Access Journals (Sweden) Dehelean Cristina A 2012-11-01 Full Text Available Abstract Background Pentacyclic triterpenes, mainly betulin and betulinic acid, are valuable anticancer agents found in the bark of the birch tree. This study evaluates birch bark extracts for their active principle composition. Results New improved extraction methods were applied on the bark of Betula pendula in order to reach the maximum content in active principles. 
Extracts were analyzed by HPLC-MS, Raman, SERS and 13C NMR spectroscopy, which revealed a very high yield of betulin (over 90%). Growth inhibiting effects were measured in vitro on four malignant human cell lines: A431 (skin epidermoid carcinoma), A2780 (ovarian carcinoma), HeLa (cervix adenocarcinoma) and MCF7 (breast adenocarcinoma), by means of MTT assay. All of the prepared bark extracts exerted a pronounced antiproliferative effect against human cancer cell lines. In vivo studies involved the anti-inflammatory effect of birch extracts on a TPA-induced model of inflammation in mice. Conclusions The research revealed the efficacy of the extraction procedures as well as the antiproliferative and anti-inflammatory effects of birch extracts. 17. Bioassay Guided Isolation of an Antidermatophytic Active Constituent from the Stem Bark of Entada spiralis Ridl. International Nuclear Information System (INIS) Aiza Harun; Siti Zaiton Mat Soad; Norazian Mohd Hassan 2015-01-01 Entada spiralis Ridl. (Leguminoceae) is a liana or woody climber that grows in the wild in Malaysia and is locally known as Beluru or Sintok. The isolation and characterization of the chemical constituent from an active fraction have been carried out since no previous study has determined any active components from the stem bark. Our previous study had revealed that the methanol extract of E. spiralis stem bark exhibited promising antifungal activity against three dermatophyte strains, namely Trichophyton mentagrophytes ATCC 9533, Trichophyton tonsurans ATCC 28942 and Microsporum gypseum ATCC 24102, that cause skin infection. This study was performed to elucidate the structure of the active constituent, an ester saponin, from the active fraction of E. spiralis stem bark. The fractions were prepared using a fractionation process, and repeated antifungal tests were conducted to identify the most active fraction. 
The structure elucidation of this compound was based on spectroscopic data (1H, 13C NMR, HMQC, HMBC and DEPT135) and comparison with the literature. On the basis of spectroscopic analysis, the compound was identified as 28-α,L-rhamnopyranosyl-18,21,22-trihydroxy-12-en-29-(2-acetylamino-β-D-gluco-pyranosyl) triterpene ester. The current study provides important baseline information for the use of E. spiralis stem bark for the treatment of skin infection caused by the microorganisms investigated in this study. (author) 18. Are bark beetles chewing up our forests? What about our coffee? Science.gov (United States) A write-up for the Elsevier SciTech Connect blog on the recently published book entitled "Bark Beetles: Biology and Ecology of Native and Invasive Species," edited by Fernando E. Vega and Richard W. Hofstetter. The book was published by Academic Press in January 2015.... 19. BRIONONIC ACID FROM THE HEXANE EXTRACT OF Sandoricum koetjape MERR STEM BARK (Meliaceae) Directory of Open Access Journals (Sweden) Tukiran Tukiran 2010-06-01 Full Text Available An oleane-type triterpenoid, briononic acid, was isolated from the hexane extract of the stem bark of Sandoricum koetjape Merr. (Meliaceae). This structure had been established based on spectroscopic data (UV, IR, and NMR) and by comparison with spectroscopic data of a related compound that had been reported. Keywords: Meliaceae, Oleane, Sandoricum koetjape Merr., Triterpenoid 20. Formulation of the extract of the stem bark of Alstonia boonei as ... African Journals Online (AJOL) Erah Department of Pharmaceutics and Industrial Pharmacy, and Department of Pharmaceutical Chemistry, Faculty of Pharmacy, University of Ibadan, Ibadan, Nigeria. Abstract. Purpose: To formulate the extracts of the stem bark of Alstonia boonei, an important antimalarial herb, into tablet dosage form. Methods: Tablets were ...
https://www.physicsforums.com/threads/state-the-trichotomy-law-formally.901217/
State the trichotomy law formally

1. Jan 22, 2017, mr_persistance

1. The problem statement: For arbitrary real numbers a and b, exactly one of the three relations holds: a < b, a > b, a = b. How do I state this more formally while also being correct?

2. The attempt at a solution

a, b ∈ ℝ ( (a < b) ⊕ (a > b) ⊕ (a = b) )

From this I made a truth table 2^3 entries long, and what we need is for the solution to be true only when exactly one relation is true and the rest are false. The XORing works logically for 7 of the 8 entries, but fails when all three values are true. One solution off the top of my head is to simply append the following snippet: ( ∧ ¬( (a < b) ∧ (a > b) ∧ (a = b) ) ). That seems really ugly, huh? But is that what the statement "exactly one of the three relations holds" turns into? I am a self-learner; anyone want to help me improve? Thank you!

2. Jan 22, 2017, Logical Dog

I think you can also use this equivalent one: $$(a \wedge \bar{b} \wedge \bar{c} ) \vee (\bar{a} \wedge b \wedge \bar{c}) \vee (\bar{a} \wedge \bar{b} \wedge c )$$ where a, b, c are the three statements.

3. Jan 22, 2017, mr_persistance

Nice, BD! I really like this; I feel there's something really beautiful about your formula. Something about the balancing of truth between all three statements: when one is true, the other two are 'weighed' with it, and those two must be the inverse of the true one for the 'weighing' to come out positive. All three are tested together to find the one of the three statements that is true, and because of the nature of the test, if any two are true, there's a contradiction. Did you just think of this on the spot?

4. Jan 22, 2017, Logical Dog

No, I played around with it on paper, then double-checked the logical equivalence on Wikipedia so I don't look like an annoying fool! I have bought Schaum's Outline of Logic and it's sitting around on my desk; I am just trying to apply what I know so I don't forget in the long run!
=) I think logical equivalences are quite interesting and some quite beautiful and unexpected. 5. Jan 23, 2017 Staff: Mentor As you have it above, it looks perfectly fine. Is there some reason you wanted to represent the trichotomy using symbolic logic? 6. Jan 23, 2017 mr_persistance 1) Just a pure joy from symbolic manipulation ( my brain is weird :D ) 2) For more complicated statements, human language is ambiguous, so being able to translate a theorem from English to logic is invaluable for my own type of thinking and understanding. 2a) I don't have to write as much 2b) It's easier to read the argument when it's a concise line of symbols 3) Some books state their model using a combination of set theory and logic instead of English and I want to be prepared 4) It's fun to learn something new. 5) Maybe someday I'll have the luxury of exploring math foundations 7. Jan 23, 2017 Staff: Mentor The three inequalities you started with are symbols, and are completely unambiguous. Really? This -- $(a \wedge \bar{b} \wedge \bar{c} ) \vee (\bar{a} \wedge b \wedge \bar{c}) \vee (\bar{a} \wedge \bar{b} \wedge c )$ is easier to write than this -- x < y, or x = y, or x > y? Not to mention that what Bipolar Demon actually wrote was this (unrendered LaTeX): (a \wedge \bar{b} \wedge \bar{c} ) \vee (\bar{a} \wedge b \wedge \bar{c}) \vee (\bar{a} \wedge \bar{b} \wedge c ) x < y, or x = y, or x > y is pretty easy to read. The problem is that a few people overuse symbolism to the point that it obfuscates the point they are trying to make. If there's a reason for using symbols instead of a prose explanation, fine, but using symbolism for its own sake should be avoided, IMO. No argument there.
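As a quick sanity check on the two formulas discussed above (my own addition, not part of the original thread), a brute-force truth table in Python confirms that three-way XOR and the "exactly one" disjunction agree everywhere except the all-true row:

```python
from itertools import product

def xor3(a, b, c):
    # plain three-way XOR: true when an odd number of inputs are true
    return a ^ b ^ c

def exactly_one(a, b, c):
    # Logical Dog's disjunctive form:
    # (a & !b & !c) | (!a & b & !c) | (!a & !b & c)
    return ((a and not b and not c)
            or (not a and b and not c)
            or (not a and not b and c))

for a, b, c in product([False, True], repeat=3):
    if (a, b, c) == (True, True, True):
        # the one row where XOR fails: odd parity, but not "exactly one"
        assert xor3(a, b, c) and not exactly_one(a, b, c)
    else:
        assert xor3(a, b, c) == exactly_one(a, b, c)

print("truth tables agree on 7 of 8 rows, as noted in post #1")
```

This is exactly the discrepancy the original poster found by hand: XOR of three statements is true for the all-true assignment, so it needs the extra conjunct (or the disjunctive form above) to express "exactly one".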
https://www.kelstonactuation.com/knowledge/screw-threads-and-mechanical-advantage
# Knowledge Base A screw is a mechanical system that converts rotational motion into linear motion. In other words, it converts torque (rotational force) into a linear force. Although ‘screw’ refers to many helical devices, in its simplest form it consists of helical threads around a cylindrical shaft (male/external thread) and its purpose is to fasten components together. At least one of the components to be fastened contains an internal/female thread, and this may be formed during the installation of the screw if the material is softer than that of the screw. This differs from the definition of a bolt, which is a similarly threaded component used to fasten un-threaded components together with the use of a nut. In both cases, rotation of the threaded component, through the application of torque, results in relative movement between it and the female thread, and the screw/bolt moves along its axis. ### Examples of Screw Mechanisms: • Corkscrew • Archimedes’ Screw • Screw Tops for containers • Screw Jack • Vice A very important property of a screw thread is that it can be used to amplify force: a small torque applied to a screw can exert a large axial force on a mass. Therefore, a threaded component is said to produce a Mechanical Advantage. Here the resultant axial force due to an input torque is calculated from first principles. Pitch – the axial distance between the screw threads. Lead – the axial distance travelled by the thread during one 360° revolution of the screw or nut; the smaller the lead, the higher the mechanical advantage (Lead = Pitch × No. of Starts). Mechanical Advantage – the ratio of axial output force to rotational input force.
The ideal mechanical advantage can easily be calculated as follows: From the conservation of energy, the work done on the screw by the rotational input force equals the work done on the mass by the resulting axial force: $$W_{in}=W_{out}$$ Work done equals the force multiplied by the distance over which it acts, so, for 1 complete revolution, the work done is given by: $$W_{in}=2\pi r F_{in}$$ And the work done on the mass by the axial force is given by: $$W_{out}=l F_{out}$$ Therefore, it can be seen that the ideal mechanical advantage is given by: $$MA={F_{out}\over F_{in}}={2\pi r\over l}$$ Therefore the smaller the thread lead, l, the greater the mechanical advantage and the larger the force the screw thread can exert for a given applied rotational force. However, we know that the applied rotational force is supplied by a torque T, where: $$T_{in}=F_{in}r$$ This assumes that the lever arm for the torque is r, the radius of the thread. Substituting this into the equation for mechanical advantage gives the resultant axial force out of the system: $$F_{out}={T_{in}2\pi \over l}$$ So it can be seen that the resultant axial force increases as the input torque increases and the thread lead decreases. It should be noted that what is calculated here is the ideal mechanical advantage; it does not take into account the large frictional losses due to the large area of contact between the male and female threads. This ability of a screw to provide a mechanical advantage can be exploited in components that utilise screw threads, such as Screw Jacks (also known as a Jack Screw, a Worm Screw Jack, a Machine Screw Jack or a Lead Screw Jack).
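To make the final formula concrete, here is a minimal numerical sketch. The figures below are my own illustrative assumptions, not values from the article:

```python
import math

# Assumed example values: a single-start thread with 2 mm pitch
# (so lead l = 2 mm) driven with an input torque of 5 N*m.
lead = 0.002       # thread lead l, in metres
torque_in = 5.0    # input torque T_in, in newton-metres

# Ideal axial output force, F_out = 2*pi*T_in / l (friction ignored)
force_out = 2 * math.pi * torque_in / lead
print(f"Ideal axial force: {force_out:.0f} N")  # about 15708 N from 5 N*m
```

As the derivation predicts, halving the lead doubles the ideal output force; a real screw delivers considerably less because of thread friction.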
http://www.isr-publications.com/jnsa/articles-2710-hybrid-method-for-the-equilibrium-problem-and-a-family-of-generalized-nonexpansive-mappings-in-banach-spaces
# Hybrid method for the equilibrium problem and a family of generalized nonexpansive mappings in Banach spaces Volume 9, Issue 7, pp 4963--4975 Publication Date: July 23, 2016 ### Authors Chakkrid Klin-eam - Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok, 65000, Thailand. - Research Center for Academic Excellence in Mathematics, Naresuan University, Phitsanulok, 65000, Thailand. Prondanai Kaskasem - Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok, 65000, Thailand. Suthep Suantai - Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai, 50200, Thailand. ### Abstract We introduce a hybrid method for finding a common element of the set of solutions of an equilibrium problem defined on the dual space of a Banach space and the set of common fixed points of a family of generalized nonexpansive mappings, and prove strong convergence theorems by using the new hybrid method. Using our main results, we obtain some new strong convergence theorems for finding a solution of an equilibrium problem and a fixed point of a family of generalized nonexpansive mappings in a Banach space. ### Keywords • Hybrid method • generalized nonexpansive mapping • NST-condition • equilibrium problem • fixed point problem • Banach space ### MSC Classification • 47J25 • 47H05 • 47H10
https://www.physicsforums.com/threads/doubt-about-expanding-universe.838117/
1. Oct 16, 2015 ### jssamp I can't be the only person to ever have this thought so I am hoping one of you star gazers can tell me what I am missing. I understand Hubble's theory and the idea of space itself expanding. My question is this. If we know the universe is expanding because of the redshift, and the farther away we look, the more the spectra are shifted to the red, doesn't that mean that in the past the expansion was greater, since the more distant light sources are older? How do we conclude that expansion is accelerating when the more distant galaxies were moving away faster at the time their light started towards us? Is it just the accepted conjecture or did I miss something? Seems to me the universe was expanding faster then and is slowing now. TBH I really can't base any conclusion about current expansion on observations of what happened 10 billion years ago. 2. Oct 16, 2015 ### Hornbein Think of light waves moving through space. As space expands the light wave is stretched. That means the wavelength is longer, i.e. redder. So there is redshift independent of the motion of that galaxy way back when. 3. Oct 17, 2015 ### Chronos We observe the chemical composition of the source at the time of emission through spectroscopic measurements. We then match the pattern of emission and absorption lines to known elements, then measure the displacement of these lines relative to their rest position to derive their redshift. The spectral displacement obviously occurs after the emission lines were emitted [save for Doppler shift due to proper motion], so something therefore happened along the way to shift the lines further into the red. That something is called the expansion of the universe. Any spectral shift due to proper motion is vanishingly tiny at cosmological distances. 4.
Oct 17, 2015 ### Janus Staff Emeritus The fact that red-shift increases with distance, in and of itself, just tells us that the universe is expanding and gives us no clue as to whether that expansion rate has increased or decreased over time. Let's first look at what we would see in a universe that expands at a constant rate. We'll use an imaginary universe where the expansion constant is 100 km/sec per light-year. Two stars 1 light-year apart will be moving apart at 100 km/sec. If we had three stars, spaced 1 light-year apart in a line, star 2 would be moving at 100 km/sec with respect to star 1, star 3 would be moving at 100 km/sec with respect to star 2, and thus star 3 is moving at 200 km/sec with respect to star 1. Every light-year between stars adds another 100 km/sec to the recession speed. So let's say that we measure a star to be 10 light-years away. We would measure its red-shift to give a velocity of 1000 km/sec. Of course we realize that this info is 10 years out of date. So that means that by the time we make the measurement, that star is actually further than 10 light-years away. So what it tells us is that when that star was 10 light-years away, it was moving away at 1000 km/sec. In turn, if we see a star 30 light-years away and measure its red-shift, and get a recession speed of 3000 km/sec, we know that 30 years ago, that star was 30 light-years away and receding at 3000 km/sec. That matches the constant we gave above of 100 km/sec per light-year and means that this universe was expanding at the same rate 30 years ago as it was 10 years ago (or now). To determine that the expansion rate has changed over time, we need to see a difference in the speed-to-distance constant as we look at further stars.
For example, if we were to measure a 1000 km/sec recession in that 10 light-year star and a 3010 km/sec recession in the 30 light-year star, that means that 10 years ago the expansion rate was 100 km/sec per light-year, but 30 years ago it was ~100.33 km/sec per light-year, and the expansion rate of our universe had slowed during the intervening 20 years. Conversely, if we measured a 2990 km/sec speed for that 30 light-year star, then we would have to conclude that the expansion rate had increased over the 20 years. So basically, if you plot red-shift/recession speed vs distance: With a constant rate of expansion, you get a straight line with red-shift increasing with distance. With a decreasing expansion rate you get a curve that deviates from that straight line in one direction. With an increasing expansion rate you get a curve that deviates in the other direction. In all of these cases, you see an increasing red-shift with increasing distance in an expanding universe; it is how red-shift changes with distance that distinguishes between the three cases. The fact that the universe was expanding was noted in 1929, but it wasn't until 1998 that the measurements that indicated the accelerating nature of this expansion were made. 5. Oct 21, 2015 ### jssamp Thank you, it finally makes sense. Now if I could just figure out the D field and polarization density. 6. Oct 21, 2015 ### rootone The inflation theory is popular and to some extent believable (to me), though I'm not sure if there is evidence as such. It suggests that the early universe's expansion rate was unimaginably fast, but lasted only for a very short amount of time. Things then settled down a bit (a phase change), and 'matter' as far as we know it came into existence. Things are still expanding now according to this, but it's a bit more relaxed. 7. Oct 21, 2015 ### bahamagreen That, and the other scenarios, are line-of-sight, which is appropriate for sighting light for red shift.
What about the lateral dimensions, geometrically? That is, with expansion, pairs of past light sources (large galactic clusters) would have been "closer together" geometrically, yet we are observing their subtended arc on the sky in present time, so their "lines of sight" are bent closer at our receiving end and the sourcing ends have spread apart with expansion since emission... in expansion, distant emission sources will have been closer together than when they are subsequently viewed much later, so the lines of sight must have bent. With varying rates of expansion, I'm thinking that maybe this geometric bending of the lines of sight might be only partially corrected.
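Janus's toy universe above is easy to verify numerically. The sketch below (my own illustration, not from the thread) reproduces the constant-rate numbers and the 3010 km/sec deceleration example:

```python
# Janus's imaginary universe: expansion constant of 100 km/sec per light-year.
H = 100.0  # km/sec per light-year

def recession_speed(distance_ly, expansion_constant=H):
    """Speed (km/sec) a star had when it was distance_ly light-years away."""
    return expansion_constant * distance_ly

# Constant-rate case: both measurements imply the same constant.
assert recession_speed(10) == 1000.0   # reading is 10 years out of date
assert recession_speed(30) == 3000.0   # reading is 30 years out of date

# Decelerating case: a 3010 km/sec reading at 30 light-years implies the
# constant was larger 30 years ago than it was 10 years ago.
h_30_years_ago = 3010.0 / 30   # ~100.33 km/sec per light-year
h_10_years_ago = 1000.0 / 10   # 100.00 km/sec per light-year
assert h_30_years_ago > h_10_years_ago
print(f"constant then: {h_30_years_ago:.2f}, later: {h_10_years_ago:.2f}")
```

The inferred constant per epoch is just speed divided by distance, so the deviation from a straight line in the speed-vs-distance plot directly encodes whether expansion sped up or slowed down.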
https://people.math.osu.edu/hiary.1/
Ghaith A. Hiary I am an assistant professor in the Department of Mathematics at the Ohio State University. I graduated from the University of Minnesota, Minneapolis, with a PhD in Mathematics in August 2008, supervised by Andrew Odlyzko. My research is supported by a grant from the National Science Foundation DMS-1406190. You can reach me at [email protected] or [email protected]. ## Publications and preprints Note. The arXiv version likely differs from the published version. ## Code, data, and experiments • An explicit van der Corput bound for $\zeta(1/2+it)$ (paper here). • An alternative to Riemann-Siegel type formulas (paper here). • A Deterministic $n^{1/3+o(1)}$ integer factoring algorithm (paper here). • I've implemented the amortized complexity algorithm to compute zeta, described here. The implementation is in C++, aided by Python scripts to organize the multi-process computation. It also includes a separate program to extract "zeta data" (e.g. derivative, max, ...) using band-limited interpolation. A database of (by now) ~600 million zeros near t = 10^28 (as well as smaller sets of about 200 million zeros at lower heights) has been obtained using the amortized algorithm, together with "raw data" files that allow quick extraction of further data that might be of interest. The computation took a few months on the riemann machine at U. Waterloo. • This is a previous coding collaboration with Jonathan Bober to implement my $T^{1/3+o(1)}$-algorithm to compute zeta, which is described here, here, and here. The implementation is in C++. It was quite useful during the implementation to constantly compare answers obtained from the C++ code with answers obtained from a basic version of the algorithm that I implemented in Mathematica back in 2009. The implementation essentially consisted of coding up the various formulas that were specified in the papers describing the algorithm. • I'm developing a library for fast computations with Dirichlet and other L-functions.
It consists of various algorithms that I have developed over the years, with a particular focus on large-scale computations. See here for an example. This is a fairly long-term project. ## Past teaching • Math 5591H & 5112: Honors Abstract Algebra (OSU, Spring 2017) • Math 5590H & 5111: Honors Abstract Algebra (OSU, Fall 2016) • Math 8120: Computational Number Theory (OSU, Spring 2016) • Math 2568: Linear Algebra (OSU, Spring 2016) • Math 5152: Introduction to Number Theory with Applications (OSU, Spring 2016) • Math 6193: Individual Studies Numerical Linear Algebra (OSU, Spring 2016) • Math 4193: Individual Studies (OSU, Fall 2015). • Math 2177: Mathematical Topics for Engineers (OSU, Spring 2015). • Math 5603: Numerical linear Algebra (OSU, Fall 2014). • Math 2173: Engineering Mathematics B (OSU, Spring 2014). • Math 5601: Essentials of numerical methods (OSU, Fall 2013). • Computational Mathematics, Mechanics, and Calculus tutorials (Bristol, Fall 2012 -- Spring 2013). • PM 340: Elementary number theory (Waterloo, Fall 2010). • Calculus I, Calculus II, Multivariable Calculus, Linear Algebra and Differential Equations recitations (UMN, Fall 2002 -- Spring 2006) ## Seminar OSU Number Theory Seminar Misc Pictures
https://rd.springer.com/article/10.1007%2Fs10474-019-00995-6
Acta Mathematica Hungarica, Volume 159, Issue 2, pp 374–399 # Functions of bounded p-variation and weighted integrability of Fourier transforms • S. A. Krayukhin • S. S. Volosivets Article ## Abstract We study some properties of functions of bounded p-variation on $$\mathbb{R}$$ and their specific fractional moduli of smoothness, including the connection between p-variational and Lp best approximations and moduli of smoothness. These properties are used to derive the results concerning weighted integrability of Fourier transforms. ## Key words and phrases functions of bounded p-variation approximation by entire functions of exponential type p-variational and Lp moduli of smoothness Fourier transform weighted integrability ## Mathematics Subject Classification 41A17 41A30 42A38 ## Notes ### Acknowledgement The authors thank the referee for useful suggestions and remarks, which improved the revised version of our paper.
https://www.biostars.org/p/9530041/#9530052
Different line length in fasta file 1 0 Entering edit mode 5 weeks ago I am currently using VEP for variant annotation. I am facing an error as below: [E::fai_build_core] Different line length in sequence 'Pn9' I understand there is an issue with the difference in line length of Pn9 in the fasta file. However, the sequences of the fasta file all have different lengths. I don't get why there is an error specifically on Pn9. I have tried using both snpEff and Annovar but it doesn't work. Any thoughts on this would be really appreciated. I am attaching the length of each sequence here. Hope you would guide me to rectify this error. Fasta Ensembl-VEP annotation samtools 0 Entering edit mode Did you unzip the reference file using bgzip? Try making fasta sequences single-lined using seqkit: seqkit seq -w 0 input.fa -o output.fa. Use output.fa for further work. 0 Entering edit mode The file was unzipped. I used bgzip to zip it before using it for VEP. Thank you for your suggestion. 0 Entering edit mode Sorry to say this but this didn't help ! 0 Entering edit mode Sorry to say this but this didn't help ! what's the error and what is the command line you are using ? 0 Entering edit mode I used the same command with my fasta file. But I am still getting the error : [E::fai_build_core] Different line length in sequence 'Pn9' 2 Entering edit mode 5 weeks ago I am attaching the length of sequence here why pasting an image when you can just copy and paste the text ? save the planet. Your problem is not related to the total length of each seq in the fasta, but, as it is said in the error message Different line length in sequence 'Pn9', in your Pn9 there are some LINES with a different number of characters. Like
>Pn9
ATCGTACGATCGATCGA
ATAGTGAC
A
AATCGCTGCTAGCTAACTG
A
0 Entering edit mode Seems like I got confused between sequence length and line length. If that's the issue can you help me to rectify that?
0 Entering edit mode I used this with a line length of 30000 since the sequence length is large, but the issue still persists.
https://www.physicsforums.com/threads/inner-product-spaces.632898/
# Inner Product Spaces

## Main Question or Discussion Point

What is the most motivating way to introduce general inner product spaces? I am looking for examples which have a real impact. For Euclidean spaces we relate the dot product to the angle between the vectors, which most people find tangible. How can we extend this idea to the inner product of general vector spaces such as the set of matrices, polynomials, functions?

• #2 HallsofIvy, Homework Helper

Actually, you have given the main motivation for inner product spaces: they generalize the Euclidean spaces with dot product. Of course, any finite-dimensional space is isomorphic to the Euclidean space of the same dimension, so we could just use the "dot product" defined on the Euclidean space. The reason for the more general definitions is to be able to work with infinite-dimensional, and in particular, function spaces.

But inner products are also important in adding a topology to vector spaces so that we can talk about "limits" and "continuity". The most general way we can introduce a topology is to add a "distance" or metric function, a function that maps pairs of vectors to a non-negative real number such that:

i) $d(u, v)\ge 0$ and d(u, v)= 0 if and only if u= v.
ii) $d(u,v)\le d(u, w)+ d(w, v)$ for any vectors u, v, w.

We interpret d(u, v) as the distance between vectors u and v (a "metric vector space" in which Cauchy sequences converge is called a "Fréchet" space). A metric alone doesn't really make use of the algebraic properties of a vector space; using them leads to a "normed space", which has a norm, a function that maps a single vector to a real number such that:

i) $|v|\ge 0$ and |v|= 0 if and only if v= 0.
ii) $|u+ v|\le |u|+ |v|$.
iii) $|\alpha v|= |\alpha| |v|$ for any vector v and scalar $\alpha$ ($|\alpha|$ is the usual absolute value of $\alpha$),

and we interpret |v| as the "length" of vector v.
Of course, if we have a "length" we have a "distance": we can define d(u, v)= |u- v|, so we have all the properties of a "metric vector space" plus additional ones. A normed space in which all Cauchy sequences of vectors converge is called a "Banach space". Other than the Euclidean spaces themselves, the most important example of a Banach space is L1, the set of all functions f whose absolute value is Lebesgue integrable on a given set U: $\int_U |f(x)|dx$ is finite, and we define $|f|= \int_U |f(x)|dx$.

We define an "inner product" on a vector space as a function that maps pairs of vectors to scalars (typically real or complex numbers), satisfying:

i) $<v, v>\ge 0$ and <v, v>= 0 if and only if v= 0
ii) $<\alpha u, v>= \alpha<u, v>$ for any two vectors, u and v, and any scalar $\alpha$.
iii) $<u+v, w>= <u, w>+ <v, w>$
iv) $<u, v>= \overline{<v, u>}$ (conjugate symmetry; for real scalars this is just symmetry).

Once we have an inner product we can define $|v|= \sqrt{<v, v>}$ and have all of the properties of a normed space (which includes all of the properties of a metric vector space) plus additional properties. An inner product space in which all Cauchy sequences of vectors converge is called a "Hilbert space". Other than the Euclidean spaces themselves, the most important Hilbert space is L2, the set of all functions "square integrable" on a set U: $\int_U f^2(x)dx$ is finite. One can show that, if both f and g are square integrable on U, then $\int_U f\bar g dx$ is finite, and we can define $<f, g>= \int_U f\bar g dx$.

• #4 chiro

A simple way to think about the motivation of inner product spaces is that an inner product gives a vector space geometry. With inner product spaces you can do projections, which have not only a geometric importance, but an importance in terms of an abstract decomposition.
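As a small numerical aside (my addition, not part of the thread): the properties the induced norm inherits from the inner product, such as Cauchy-Schwarz and the triangle inequality, can be sanity-checked on a discretized version of the L2 inner product:

```python
import math
import random

# Discretized L^2 inner product on [0,1]: <f, g> ~ (1/n) * sum f(x_i) g(x_i)
def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) / len(u)

def norm(u):
    # the norm induced by the inner product: |u| = sqrt(<u, u>)
    return math.sqrt(inner(u, u))

random.seed(0)
n = 1000
f = [random.gauss(0, 1) for _ in range(n)]
g = [random.gauss(0, 1) for _ in range(n)]

# Cauchy-Schwarz: |<f, g>| <= |f| |g|
assert abs(inner(f, g)) <= norm(f) * norm(g) + 1e-12
# Triangle inequality for the induced norm: |f + g| <= |f| + |g|
fg = [a + b for a, b in zip(f, g)]
assert norm(fg) <= norm(f) + norm(g) + 1e-12
```

Both inequalities hold for any real inner product, so the asserts pass for every choice of random vectors.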
http://stats.stackexchange.com/questions/44332/what-does-it-mean-in-terms-of-regression-if-residuals-are-not-white-noise
# What does it mean in terms of regression if residuals are not white noise?

I need help in answering this one; it is an exam question.

Are these supposed to be residuals from a fit of a time series? That is often the context where the term "white noise" is used. – gung Nov 24 '12 at 15:22

Strictly speaking, the residuals of a regression are never exactly white noise. Since each residual is a function of the entire data set, the residuals are lightly correlated. But ... there's correlation and there's correlation. Residuals can fail to be "white noise" if:

• The regression model was not correctly specified, e.g. $Y=a + bX + cX^2$ should have been chosen instead of $Y=a+bX$.
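The misspecification bullet can be made concrete with a small sketch (my addition, not from the answer): fitting a straight line to data that actually follow a quadratic leaves residuals with an obvious smooth pattern, visible as a lag-1 autocorrelation close to 1, whereas white-noise residuals would have autocorrelation near 0.

```python
# Fit y = a + b*x to data that actually follow y = x^2:
# the leftover residuals are a smooth pattern, not white noise.
n = 51
xs = [-1 + 2 * i / (n - 1) for i in range(n)]
ys = [x * x for x in xs]

# closed-form least-squares line
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
res = [y - (a + b * x) for x, y in zip(xs, ys)]

def lag1(r):
    # lag-1 autocorrelation of a sequence
    m = sum(r) / len(r)
    num = sum((r[i] - m) * (r[i + 1] - m) for i in range(len(r) - 1))
    den = sum((v - m) ** 2 for v in r)
    return num / den

print(lag1(res) > 0.9)  # strongly patterned residuals: model misspecified
```

On this symmetric grid the fitted slope is essentially zero, so the residuals are just x^2 minus its mean, a deterministic bowl shape rather than noise.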
https://tex.stackexchange.com/questions/301496/change-bibtex-to-biblatex
# Change bibtex to biblatex

I'm writing my thesis with the University template. However, the template seems to only support bibtex, but I really want to use commands like \citet and \citep from biblatex. Could anybody tell me how to make changes to the .cls file? Thanks so much!

The citation handling seems to be controlled by these commands in the thesis.cls file:

\newif\ifrawbibliography
\rawbibliographyfalse
\def\thebibliography#1{%
  \ifrawbibliography
  \else
    \newpage
    \thispagestyle{empty}%
    % Switch singlespace to after the heading gets printed.
    \par\removelastskip\singlespace\par\removelastskip% GBG Oct 1993
  \fi
  \list{[\arabic{enumi}]}%
    {\settowidth\labelwidth{[#1]}\leftmargin%
     \def\newblock{\hskip .11em plus .33em minus -.07em}%
     \sloppy\clubpenalty4000\widowpenalty4000%
     \sfcode`\.=1000\relax
    }%
}
\let\endthebibliography=\endlist% why not \endsinglespace?

• Roughly you would have to delete everything in the .cls file that has to do with bibliographies and then load biblatex instead. Then you need to make sure that things look as they did before. – moewe Mar 30 '16 at 6:07
• Thanks. So you mean I cannot modify the previous commands to control the margin or space? – Bowen Zhao Mar 30 '16 at 15:59
• You can absolutely do that, but you will probably not be able to use the exact same commands, so you cannot just copy the code from your snippet above; you will have to do some manual work. – moewe Mar 31 '16 at 8:59
• You can control the spacing/margins using the \defbibenvironment{bibliography} command. You can look at page 193 of the biblatex manual to see how it works. For using \citet, \citep and similar you have to use the biblatex `natbib` option. – Guido Mar 31 '16 at 22:48
• If you solved your problem yourself, you might want to write and post an answer yourself. – moewe Apr 3 '16 at 16:31
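For reference, a minimal preamble sketch (my illustration, not from the thread; the backend, style, and file names are placeholders to adapt to the thesis class) showing biblatex with the natbib option in place of bibtex:

```latex
\documentclass{article}
% natbib=true provides \citet and \citep on top of biblatex.
% backend, style, and the .bib file name are placeholders -- adjust to taste.
\usepackage[backend=biber,natbib=true,style=authoryear]{biblatex}
\addbibresource{thesis.bib}   % replaces \bibliography{thesis}

\begin{document}
\citet{knuth1984} showed\ldots   % textual citation, as with natbib
This was shown earlier \citep{knuth1984}.  % parenthetical form

\printbibliography   % replaces the \thebibliography environment
\end{document}
```

In a real thesis class, the old \thebibliography definition would be removed first, and spacing/margins re-created with \defbibenvironment as Guido suggests.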
https://asmedigitalcollection.asme.org/appliedmechanics/article-abstract/41/3/668/387510/Interlaminar-Stresses-in-Composite-Laminates-An?redirectedFrom=fulltext
An approximate elasticity solution is developed for the response of the finite-width, angle-ply composite laminate under uniform axial strain. The solution yields components of the displacement vector, strain tensor, and stress tensor in the form of sinusoidal-hyperbolic series. Results of the approximate solution are compared to numerical solutions of the exact equations for four-layer, symmetric laminate geometries. The nature of the boundary disturbance in the eight-layer laminate is also examined by the approximate solution.
https://link.springer.com/article/10.1007%2Fs10107-018-1350-9
# Convergence rate of inertial Forward–Backward algorithm beyond Nesterov's rule

• Vassilis Apidopoulos • Jean-François Aujol • Charles Dossal

Full Length Paper, Series A

## Abstract

In this paper we study the convergence of an Inertial Forward–Backward algorithm with a particular choice of an over-relaxation term. In particular, we show that for a sequence of over-relaxation parameters that do not satisfy Nesterov's rule, one can still expect some relatively fast convergence properties for the objective function. In addition, we complement this work by studying the convergence of the algorithm in the case where the proximal operator is inexactly computed, in the presence of some errors, and we give sufficient conditions on these errors in order to obtain some convergence properties for the objective function.

## Keywords

Convex optimization • Proximal operator • Inertial FB algorithm • Nesterov's rule • Rate of convergence

## Mathematics Subject Classification

49M20 • 46N10 • 90C25 • 65K10

## Acknowledgements

The authors would like to thank the anonymous reviewers for all their useful comments and advice and for pointing out some important references.
© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2018

## Authors and Affiliations

Vassilis Apidopoulos, Jean-François Aujol, Charles Dossal: IMB, UMR 5251, Université de Bordeaux, Talence, France
http://www.physicsforums.com/showthread.php?s=57f742e3bc45c194ce4749b399f9098b&p=4158000
## Moment generating function

Given m(t) = (1 - p + p*e^t)^5, what is the probability P(X < 1.23)?

I know that m(t) = e^(tx) * f(x), that m'(0) = E(X), and from m''(0) I can find Var(X). Should I calculate it using a normal table?

Recognitions: Homework Help

No, m(t) is not e^(tx) * f(x); for a continuous variable it is ∫ e^(tx) f(x) dx over x = 0..∞. And why would you use a normal table when the random variable is very far from normal? You can actually work out explicitly what the distribution is by expanding out the power and collecting terms in e^(kt) for k = 0, 1, 2, 3, 4, 5.

RGV
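Expanding the power as RGV suggests shows this is the MGF of a Binomial(5, p) variable, so P(X < 1.23) = P(X = 0) + P(X = 1). A small sketch (my addition; the question leaves p unspecified, so p = 0.5 below is only an illustrative value):

```python
from math import comb

def binom_pmf(k, n, p):
    # coefficient of e^(k t) in the expansion of (1 - p + p e^t)^n
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_less_than(x, n, p):
    # P(X < x) for the Binomial(n, p) variable behind this MGF
    return sum(binom_pmf(k, n, p) for k in range(n + 1) if k < x)

# P(X < 1.23) = P(X = 0) + P(X = 1); with p = 0.5 this is 6/32
print(prob_less_than(1.23, 5, 0.5))  # 0.1875
```

The answer therefore depends on p; no normal table is needed.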
http://mathhelpforum.com/geometry/171381-coordinate-geometry.html
1. ## Coordinate Geometry

Write a formula for the distance from A = (1, 5) to P = (x, y), and another formula for the distance from P = (x, y) to B = (5, 2). Then write an equation that says that P is equidistant from A and B. Simplify your equation to linear form.

2. Originally Posted by thamathkid1729

1. Based on the Pythagorean theorem, the distance between the points $P(x_P, y_P)$ and $Q(x_Q, y_Q)$ is calculated by:

$d(P,Q)=\sqrt{(x_P - x_Q)^2+(y_P-y_Q)^2}$

2. You'll get the equation $d(A,P) = d(B, P)$.

i) Square both sides of the equation.
ii) Expand the brackets.
iii) Collect like terms - and you're done.
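Carrying out steps i)-iii) with A = (1, 5) and B = (5, 2) gives the line 8x - 6y - 3 = 0 (the perpendicular bisector of AB). A quick numerical check of that result (my addition, not part of the exercise):

```python
from math import dist, isclose

A, B = (1.0, 5.0), (5.0, 2.0)

# Points on the simplified line 8x - 6y - 3 = 0, i.e. y = (8x - 3)/6
for x in (-2.0, 0.0, 3.0, 10.0):
    P = (x, (8 * x - 3) / 6)
    assert isclose(dist(A, P), dist(B, P))  # equidistant from A and B

print("8x - 6y - 3 = 0 passes the equidistance check")
```

Every sampled point on the line is the same distance from A as from B, confirming the linear form.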
https://www.physicsforums.com/threads/finding-equations-w-two-given-points.138026/
# Finding equations w/ two given points:

1. Oct 12, 2006 (AznBoi)

Ok, there are two given points and I need to find the equation. Points: (-3, 0), (-.5, 0).

My work:
y = (x+3)(x+.5)
y = x^2 + .5x + 3x + 15

Completed the square:
-15 = x^2 + 3.5x
-15 = x^2 + 3.5x + 3.0625
-11.9375 = (x+1.75)^2
y = (x+1.75)^2 + 11.9375

What did I do wrong??

2. Oct 12, 2006

$$y = (x+3)(x+.5) = x^{2}+3.5x + 1.5$$

You multiplied 3 by 5 instead of 3 by 0.5.

3. Oct 12, 2006 (AznBoi)

ohhh lol thanks.

4. Oct 12, 2006 (HallsofIvy, Staff Emeritus)

Why did you assume a quadratic function? Two points determine a straight line. The constant function y = 0 passes through (-3, 0) and (-.5, 0). In fact there are an infinite number of functions whose graphs pass through those two points.

5. Oct 12, 2006

Halls, he meant the zeros of the function. He is doing quadratic equations, after all.
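A quick numerical check of the corrected expansion (my addition, not from the thread): completing the square on x^2 + 3.5x + 1.5 gives (x + 1.75)^2 - 1.5625, not + 11.9375, since 1.5 - 1.75^2 = -1.5625.

```python
# Verify that the factored, expanded, and vertex forms agree,
# and that the roots really are x = -3 and x = -0.5.
def factored(x):  return (x + 3) * (x + 0.5)
def expanded(x):  return x**2 + 3.5 * x + 1.5
def vertex(x):    return (x + 1.75) ** 2 - 1.5625

for x in (-3, -0.5, 0, 1, 2.5, 10):
    assert abs(factored(x) - expanded(x)) < 1e-9
    assert abs(expanded(x) - vertex(x)) < 1e-9

assert factored(-3) == 0 and factored(-0.5) == 0
```

All three forms agree at every test point, so the vertex sits at (-1.75, -1.5625).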
http://mathhelpforum.com/statistics/201369-help-standard-deviation-expected-value-please-print.html
# Help with standard deviation & expected value please

• Jul 25th 2012, 11:06 PM — numbaoneazn1

x      -5     -1     2      7      10
P(x)   0.15   0.08   0.22   0.18   0.37

Determine the expected value of x. Determine the standard deviation of x.

• Jul 26th 2012, 02:45 AM

Re: Help with standard deviation & expected value please

Stick to the definitions, $E(X)=\Sigma_{i=1}^{n}x_iP(x_i)$ and $SD(X)=\sqrt{\Sigma_{i=1}^{n}\left[x_i-E(X)\right]^2P(x_i)}$ or $SD(X)=\sqrt{E(X^2)-E(X)^2}$

Next time please show where you cannot understand, because to me this seems quite basic. (Nod)
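Applying those definitions to the table (a worked sketch I added, not part of the thread):

```python
from math import sqrt

xs = [-5, -1, 2, 7, 10]
ps = [0.15, 0.08, 0.22, 0.18, 0.37]
assert abs(sum(ps) - 1.0) < 1e-12        # valid probability distribution

E  = sum(x * p for x, p in zip(xs, ps))      # E(X)
E2 = sum(x * x * p for x, p in zip(xs, ps))  # E(X^2)
SD = sqrt(E2 - E * E)                        # SD(X) = sqrt(E(X^2) - E(X)^2)

print(round(E, 2))   # 4.57
print(round(SD, 4))  # 5.4447
```

So E(X) = 4.57 and SD(X) = sqrt(50.53 - 4.57^2) = sqrt(29.6451), about 5.4447.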
http://mathoverflow.net/questions/55147/on-random-dirichlet-distributions
# On random Dirichlet distributions Fix a dimension $d\ge2$. • Let $Q_d$ denote the positive quadrant of $\mathbb{R}^d$, that is, $Q_d$ is the set of points $\mathbf{x}=(x_i)_i$ in $\mathbb{R}^d$ such that $x_i>0$ for every $i$. • For every $\mathbf{x}$ in $Q_d$, let $|\mathbf{x}|=x_1+\ldots+x_d$. • Let $\Delta_d$ denote the set of points $\mathbf{x}$ in $Q_d$ such that $|\mathbf{x}|=1$. • For every $\mathbf{a}$ and $\mathbf{b}$ in $Q_d$, define $\mathbf{a}\cdot \mathbf{b}$ in $Q_d$ by $(\mathbf{a}\cdot \mathbf{b})_i=a_ib_i$ for every $i$. • For every $\mathbf{a}$ in $Q_d$, let $\mathrm{Dir}(\mathbf{a})$ denote the Dirichlet distribution of parameter $\mathbf{a}$. The problem in a nutshell Fix $\mathbf{a}$ and $\mathbf{b}$ in $Q_d$. Choose a random parameter $\mathbf{u}$ in $\Delta_d$ with distribution $\mathrm{Dir}(\mathbf{a})$. Then choose a random point $\mathbf{X}$ in $\Delta_d$ with distribution $\mathrm{Dir}(\mathbf{b}\cdot \mathbf{u})$. My aim is to understand the (absolute) distribution of $\mathbf{X}$. Some more notations For every $\mathbf{a}=(a_i)_i$ in $Q_d$, $\mathrm{Dir}(\mathbf{a})$ is the absolutely continuous probability measure on $\Delta_d$ whose density $f(\ |\mathbf{a})$ at $\mathbf{x}$ is proportional to $x_1^{a_1-1}\cdots x_d^{a_d-1}$. 
More precisely, $$f(\mathbf{x}|\mathbf{a})=\Gamma(|\mathbf{a}|)\mathbf{x}^{\mathbf{a}-1}/\Gamma(\mathbf{a}),$$ with the following shorthands: $$\Gamma(\mathbf{a})=\Gamma(a_1)\cdots\Gamma(a_d),\quad \mathbf{x}^{\mathbf{a}-1}=x_1^{a_1-1}\cdots x_d^{a_d-1}.$$ The density $f_{\mathbf{a},\mathbf{b}}$ of the distribution of $\mathbf{X}$ is $$f_{\mathbf{a},\mathbf{b}}(\mathbf{x})=\int_{\Delta_d} f(\mathbf{x}|\mathbf{b}\cdot \mathbf{u})f(\mathbf{u}|\mathbf{a})\mathrm{d}u_1\cdots\mathrm{d}u_{d-1}.$$ Some special cases If $a_i=b_i=1$ for every $i$, $\displaystyle f_{\mathbf{1},\mathbf{1}}(\mathbf{x})\propto\int_{\Delta_d} \frac{\mathbf{x}^{\mathbf{u}-1}}{\Gamma(\mathbf{u})}\mathrm{d}u_1\cdots\mathrm{d}u_{d-1}.$ The case $d=2$ yields $$f_{\mathbf{1},\mathbf{1}}(x,1-x)\propto\int_0^1\frac{x^{w-1}(1-x)^{-w}}{\Gamma(w)\Gamma(1-w)}\mathrm{d}w=\frac1{\pi x}\int_0^1\left(\frac{x}{1-x}\right)^{w}\sin(\pi w)\mathrm{d}w,$$ hence $$f_{\mathbf{1},\mathbf{1}}(x,1-x)=\frac1{x(1-x)}\frac1{\pi^2+(\log[x/(1-x)])^2}.$$ Writing $\mathbf{X}=(X_1,X_2)$ with $X_1\ge0$, $X_2\ge0$ and $X_1+X_2=1$, this can be rewritten as the fact that, for every $x$ in $(0,1)$, $$P(X_1\le x)=P(X_2\le x)=\frac12+\frac1\pi\arctan\left(\frac1\pi\log\left(\frac{x}{1-x}\right)\right).$$ Are there other cases where the density $f_{\mathbf{a},\mathbf{b}}$ is (reasonably) explicit? Or, for example, where the moments $E(\mathbf{X}^\mathbf{n})$ of $\mathbf{X}$ with $\mathbf{n}=(n_1,\ldots,n_d)$ any $d$-uplet of integers, are (reasonably) explicit? -
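For numerical exploration (my addition, not part of the question), the two-stage scheme is easy to simulate with the standard Gamma-variate construction of the Dirichlet distribution, and Monte Carlo estimates of the moments E(X^n) can then be read off the samples. In the special case d = 2, a = b = (1, 1), the derived density is symmetric under x ↔ 1 - x, so E(X_1) should be near 1/2:

```python
import random

random.seed(7)

def dirichlet(alpha):
    """One draw from Dir(alpha): normalized independent Gamma(alpha_i) variates."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def sample_X(a, b):
    """u ~ Dir(a), then X ~ Dir(b . u), with . the componentwise product."""
    u = dirichlet(a)
    return dirichlet([bi * ui for bi, ui in zip(b, u)])

xs = [sample_X([1.0, 1.0], [1.0, 1.0]) for _ in range(20000)]
assert all(abs(sum(x) - 1.0) < 1e-9 for x in xs)   # X lives on the simplex
mean_x1 = sum(x[0] for x in xs) / len(xs)
assert abs(mean_x1 - 0.5) < 0.05                   # symmetry of f_{1,1}
```

The same sampler, with other a and b, gives quick empirical checks against any candidate closed form for f_{a,b}.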
https://wattsupwiththat.com/2016/10/22/chaos-climate-part-4-an-attractive-idea/
# Chaos & Climate – Part 4: An Attractive Idea

Guest Essay by Kip Hansen

"The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible."

Introduction: (if you've read the previous installments, you may skip this intro)

The IPCC has long recognized that the Earth's climate system is a coupled non-linear chaotic system. Unfortunately, few of those dealing in climate science – professional and citizen scientists alike – seem to grasp the full implications of this. It is not an easy topic – not a topic on which one can read a quick primer and then dive into real-world applications. This essay is the fourth in a short series of essays to clarify the possible relationships between Climate and Chaos. This is not a highly technical discussion, but a basic introduction to the subject to shed some light on just what the IPCC might mean when it says "we are dealing with a coupled non-linear chaotic system" and how that could change our understanding of the climate and climate science. The first three parts of this series are: Chaos and Climate – Part 1: Linearity; Chaos & Climate – Part 2: Chaos = Stability; Chaos & Climate – Part 3: Chaos & Models. Today's essay concerns the idea of chaotic attractors, their relationship to climate concepts, and a short series wrap-up.

Definitions: (if you already understand the first sentence below, you may skip the rest of this section)

It is important to keep in mind that all uses of the word chaos (and its derivative chaotic) in this essay are intended to have meanings in the sense of Chaos Theory, "the field of study in mathematics that studies the behavior of dynamical systems that are highly sensitive to initial conditions".
In this essay the word chaos does not mean “complete confusion and disorder: a state in which behavior and events are not controlled by anything”  Rather it refers to dynamical systems in which “Small differences in initial conditions …yield widely diverging outcomes …, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable.”  Edward Lorenz referred to this as “seemingly random and unpredictable behavior that nevertheless proceeds according to precise and often easily expressed rules.”   If you do not understand this important distinction, you will completely misunderstand the entire topic.  If the above is not clear (which would be no surprise, this is not an easy concept), please read at least the wiki article on Chaos Theory.   I give a basic reading list  at the end of this essay. Climate Attractors:  An Attractive Idea In the field known as Chaos Theory, the study of dynamical systems sensitive to initial conditions, there is a phenomenon known as an attractor.   Here I give the definition of this concept from the venerable Wiki: …an attractor is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system.  System values that get close enough to the attractor values remain close even if slightly disturbed. In finite-dimensional systems, the evolving variable may be represented algebraically as an n-dimensional vector. The attractor is a region in n-dimensional space. 
In physical systems, the n dimensions may be, for example, two or three positional coordinates for each of one or more physical entities; … If the evolving variable is two- or three-dimensional, the attractor of the dynamic process can be represented geometrically in two or three dimensions.  An attractor can be a point, a finite set of points, a curve, a manifold, or even a complicated set with a fractal structure known as a strange attractor. …. Describing the attractors of chaotic dynamical systems has been one of the achievements of chaos theory. In previous parts of this series, I have shared examples and images of various attractors.  The household funnel is the simplest physical example.  When held with the spout pointing down, any object entering the mouth of the funnel tends down and out the spout.  Exact placement in the funnel mouth doesn’t matter; all points lead to the spout. The funnel represents a type of attractor called a point attractor.  Once the system enters the attractor, the value evolves towards this single point. A cyclical attractor might have two or three values (ranges in some cases), cycling between them.  We see this in certain values of the Bifurcation Diagram, expressed as the Period Doubling that leads to chaos. From Part 2: When we graph this equation — x → r x (1 – x) — with a beginning “r” of 2.8, and an initial state value of 0.2, this is what we find: Even though the starting value for x is 0.2, iterating the system causes the value of x to settle down to a value between 0.6 and 0.7 – more precisely 0.64285 — after 50 or so iterations.   Jumping in at the 50th iteration, and forcing the value out of line, down to 0.077 (below) causes a brief disturbance, but the value of x returns to precisely 0.64285 in a short time: Kicking the value out of line upward at year 100 has a similar result.  Adjusting the “r”, the forcing value, down a bit at year 150 brings the stable attractor lower, yet the behavior remains stable, as always.
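The settling behavior described above is easy to verify numerically. The short Python sketch below (my own illustration, not code from the essay) iterates the logistic map x → r x (1 − x) with r = 2.8 and shows the iterates settling onto the point attractor at 1 − 1/r ≈ 0.64286, and returning to it after being "kicked out of line":

```python
# Illustration (not from the essay): the logistic map x -> r*x*(1-x)
# with r = 2.8 has a stable point attractor at 1 - 1/r.

def logistic_trajectory(r, x0, n):
    """Return the first n iterates of the logistic map, starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

r = 2.8
fixed_point = 1.0 - 1.0 / r            # = 0.642857..., the attractor

traj = logistic_trajectory(r, 0.2, 100)
print(round(traj[-1], 5))              # -> 0.64286: settled on the attractor

# "Kick the value out of line" to 0.077 and iterate again:
# the system returns to the very same value.
kicked = logistic_trajectory(r, 0.077, 100)
print(round(kicked[-1], 5))            # -> 0.64286 again
```

Whatever the starting value in (0, 1), the iterates are drawn back to the same number, which is exactly what the graphs from Part 2 show.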
In the above example, the attractor of the system is a single value, to which the numerical value tends to evolve even when perturbed.   In other systems, the graphed values might appear to spiral in to a single point or travel in complicated paths that eventually and inevitably lead to a single point. Following from the Bifurcation Diagram, one sees easily that at some values of “r” the system becomes cyclic, with periods of 2, 4, 8, 16 as “r” increases until chaos ensues, yet past that point one still finds points, values of “r”, where the period is 3 then 6, 12, 24.  Each vertical slice through the diagram presents one with the attractor for that value of “r”, each of which could be represented by its own geometric visualization. Some dynamical systems do the opposite – no matter where you start them, one or more values races off to infinity. And some attractors, when viewed as plotted graphics, are fantastically varied and beautiful to look at: Lorenz’s famous Butterfly Attractor (named for two reasons: 1.  It looks a bit like a butterfly’s wings and 2. In honor of the Butterfly effect), is often used as a proxy for “the attractor” of Climate (with the initial cap). See an animated version here. This error appears many times in the literature and in “popular science” explanations of both Climate and Chaos.  The latest version making the rounds, recently posted to blog comments repeatedly, is a pair of related videos (parts of a 9 chapter film) from Jos Leys, Étienne Ghys and Aurélien Alvarez at chaos-math.org.  The films are lovely and very well made, well worth watching.  However, though they specifically explain that the Lorenz attractor is not in any way a representation of the climate: “In 1963, Edward Lorenz (1917-2008), studied convection in the Earth’s atmosphere. As the Navier-Stokes equations that describe fluid dynamics are very difficult to solve, he simplified them drastically.
The model he obtained probably has little to do with what really happens in the atmosphere.”,   they go on to use it and the Lorenz Mill to make the suggestion that climate is predictable based on the finding that some of the features of the Lorenz Attractor and the Lorenz Mill are statistically probabilistic, hence predictable. In the second (Chapter 8) film, they specifically claim: “Take three regions on the Lorenz attractor (they could represent conditions of hurricane, drought or snow). If we measure the proportions of the time that trajectories with different initial conditions spend within these regions, then we find that for all trajectories, these proportions converge to the same numbers, even if the order in which the trajectories encounter the three regions is incomprehensible.  ….  By refocusing on statistical issues, science can still make predictions!” Readers who want the full blood-and-guts version of why this is nonsensical (other than in a trivial way) can read Tomas Milanovic’s Determinism and predictability over at Judith Curry’s excellent blog, Climate Etc. (Be sure to go through and read all the comments from Tomas Milanovic, David Young and Michael Kelly).   Those with more pragmatic tastes (and a more common, lower, level of understanding of higher maths) can read my post (also at Dr. Curry’s) Lorenz validated. Let me just make a couple of obvious points for those who don’t have the time to watch the two 13-minute films or read the two Climate Etc. posts. 1. The Lorenz Attractor has [almost] nothing to do with climate or weather in the form used by Lorenz. “The Lorenz attractor arises in a simplified system of equations describing the two-dimensional flow of fluid with uniform depth and imposed temperature difference between the upper and lower surfaces.” — Richard McGehee 2. One must use very specific parameters to get the Lorenz equations to produce the Lorenz Attractor – other parameters produce single point attractors. 3.
Looking at arbitrarily selected “regions” of the Lorenz Attractor – and saying “they could represent conditions of hurricane, drought or snow” – is disingenuous. The attractor has no snow, no rainfall or drought (as the equations are about fluid flow in two dimensions under temperature differences, it might describe something resembling a hurricane, if applied to a real physical system, such as the famous washtub experiments of atmospheric circulation). Regions of the Lorenz Attractor do not represent weather of any kind whatever. 4. Probabilistic analysis of the Lorenz attractor is interesting to mathematicians – but not to weathermen or climate scientists. 5. The real world climate is chaotic, complex, bounded, multi-dimensional, and, if it has attractors, they will themselves exist in multiple phase spaces – as Tomas Milanovic points out, “the ability to compute phase space averages for particular attractor topologies changes nothing on the fact that the system is still chaotic and will react on perturbations in an unpredictable way over larger time scales.” 6. We have absolutely (literally absolutely) no idea what the precise (or even an approximate) attractor for the weather or climate system might look like, separate from the long-term historic climate record.  We have no reason to believe it would be statistically smooth or even if it would be amenable to statistical analysis. Given all that, the idea that the climate system might have the physical equivalent of a chaotic attractor, even if it is a strange attractor, is still quite appealing to many.  If it did, and we could discover it, mathematically or physically, we might then attempt some kind of statistical analysis of it to have some idea of the probabilities of what climate might do in the future.  But only probabilities, and “probabilities of what” is highly uncertain.
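The sensitivity Milanovic describes can be illustrated with the Lorenz equations themselves. The sketch below is my own illustration, not code from the essay or the films: it uses the classic parameters (sigma = 10, rho = 28, beta = 8/3, the specific values needed to produce the Butterfly Attractor) and integrates two trajectories whose starting points differ by one part in a hundred million, recording how far apart they drift.

```python
# Illustration (my sketch, not the essay's): two Lorenz trajectories
# with near-identical starts diverge until they are as far apart as
# the attractor itself is wide. Classic parameters: 10, 28, 8/3.

def lorenz_deriv(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    """One fourth-order Runge-Kutta step of the Lorenz system."""
    def add(p, k, h):
        return tuple(pi + h * ki for pi, ki in zip(p, k))
    k1 = lorenz_deriv(s)
    k2 = lorenz_deriv(add(s, k1, dt / 2))
    k3 = lorenz_deriv(add(s, k2, dt / 2))
    k4 = lorenz_deriv(add(s, k3, dt))
    return tuple(si + dt / 6 * (p + 2 * q + 2 * u + v)
                 for si, p, q, u, v in zip(s, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # differs by one part in 10^8

dt = 0.01
max_sep = 0.0
for _ in range(3000):        # 30 time units
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    sep = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

# Deterministic, yet the two runs end up bearing no useful resemblance.
print(max_sep)
```

Both runs are fully deterministic and both stay on the attractor, yet after a modest integration time their separation is of the order of the attractor's own size, which is the whole point about prediction.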
Remember, the climate covers the whole planet, and while we are mostly interested in what takes place close to the surface, it happens at all levels – a huge complicated area in both space and time.  The possibility of analysis that would reveal useful statistical probabilities for even general climate issues such as hard or mild northern hemisphere winters in anything but the near-present, certainly less than a decade, is unlikely. Probabilities might be interesting mathematically.  Every gambler knows the probabilities of his game – the chances – and knows that probabilities are not predictions or projections – a bet on lucky number 17 still has a one in 38 chance of winning a payout of 36x on every spin of the roulette wheel in Las Vegas – knowing the probabilities doesn’t give him any insight into what the spin will bring.  The action of the ball in a roulette wheel is chaotic in the sense of sensitivity to initial conditions – the exact speed of the spin of the wheel, the force the croupier gives to the ball, the exact point of release of the ball and its exact relationship to the spinning wheel (which spins in the opposite direction to that of the ball) at that precise moment.  The ball’s subsequent motion depends then on the exact conditions, speed and angle, when it leaves the track and strikes the first deflector – and while that motion will be entirely deterministic, it simply sets the initial conditions for the next contact of the ball with another deflector or separator. The path of the ball during this spinning and bouncing is chaotic.   Rather quickly the ball runs out of energy and is captured by one of the 38 numbered pockets in the wheel. In a fair wheel, with a large enough number of spins, the results are spread evenly among all of the 38 possibilities, each coming up 1/38th of the time.
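The roulette point, that the long-run frequencies can be known exactly while no single spin is predictable, can be checked with a quick Monte Carlo sketch (my illustration, assuming a fair 38-pocket American wheel):

```python
# Illustration (my sketch): on a fair 38-pocket wheel the frequency of
# any one number converges to 1/38 over many spins, yet knowing that
# tells the gambler nothing about the next spin.
import random

random.seed(17)                      # reproducible "spins"
spins = 380_000
hits = sum(1 for _ in range(spins) if random.randrange(38) == 17)

frequency = hits / spins
print(frequency)                     # close to 1/38, about 0.0263

# Long-run return on a 1-unit straight-up bet with a 36x payout:
expected_return = frequency * 36
print(expected_return)               # close to 36/38, about 0.947
```

The long-run return of roughly 0.947 per unit wagered is the house edge made visible: perfectly knowable statistics, and still no prediction of any individual spin.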
The probabilities can be perfectly known, yet the outcome of any one spin cannot be predicted – we can, however, predict the outcome of ten thousand spins – more-or-less 1 in 38 for each number.  Such a probability prediction only allows the gambler not to make stupid mistakes – like thinking three reds in a row means the next spin must be black.  Such a set of generalized probabilities would be useless for climate or weather.  (I would, however, like to read an essay on the potential usefulness of climatic probabilities – what kind of probabilities some think might be discovered and how we might use them to our benefit.) (You would be surprised by how many instructional videos there are on systems for beating the casinos at roulette – all of them showing remarkable results.  Yet the makers of the films are not retired millionaire gamblers, and one wonders why they don’t just tour casinos and make a mint with their own systems?) Even if, by some quirk of fate, we were able to stumble upon the structure of the multi-phasic attractor of Earth’s climate in the present day, which could then somehow magically be analyzed for statistical probabilities in a useful spatial and temporal way, such as seasonally for a specific region over the next decade, they would still just be probabilities, with only one actuality allowed.   After that, the minute alterations of the ever-changing initial conditions and determining parameters of the system would lead to unpredictable differences in the attractor or even a shift to a new attractor altogether.   These issues make useful long-term predictions of the climate impossible. But can we make any useful predictions about the climate?  Of course we can! If there is a shift in the northern jet stream, we can predict things about near-term European seasonal weather.  If an El Niño develops, we can predict certain general weather and climate conditions.
If there is a persistent blocking high in one area, weather  is predictably affected downstream. Where does our ability to make these predictions come from?  From models?   Only models of the past – looking at the historical climate, recognizing patterns and associations, checking them against the records, and using them to make reasonable guesses about what might be coming up in the near future. An aside about Hurricane Forecasting using Models We can make weather/climate predictions about the near-future in some cases – hurricane-path prediction models are “pretty good” out several days, certainly good enough to issue warnings and for localities to make preparations, with a current average track accuracy of a bit better than +/- 50 nautical miles at 24 hours out.  The error increases with time  – at 48 hours 75 miles, at 72 hours 100 miles,  at 5 days it is 200 miles.  These results are about one half of the track errors in 1989.  This accuracy was enough to warn the barrier islands of Brevard County, Florida (Cape Canaveral, Cocoa Beach, Patrick Air Force Base) for the recent major Hurricane Matthew – the islands were evacuated based on 24 hour predictions of a direct hit. The difference of ~50  miles is illustrated here – note the times of the two images – the path projected in the left image is three hours earlier than the right image — the difference in the path can best be seen  in the blow-ups in the upper-left of  each of the two images: Live television broadcasters called this “the little 11th hour shift” that saved Cape Canaveral – the center of Matthew shifted east 20-30 miles, making the difference between the direct landfall of the eye of a major hurricane on the highly developed barrier islands and the effects of a near-miss pass 30 miles off-shore. You can watch this evolve in an animation in the National Hurricane Center’s archive of Hurricane Matthew. How can we best predict future climates? 
I maintain that the best chance of determining the probabilities of long-term future climate outcomes lies in the past, not in mathematical, numerical modeling attempts to predict or project the future.    We know, to varying degrees of accuracy, temporally and spatially, what the climate was in the past; it has had tens of thousands of years to go through its iterations, season to season, year to year, and has left evidence of its passing.  The past shows us the actual boundaries, the physical constraints of the system as it really operates. Some maintain that because we are changing the composition of the atmosphere by adding various GHGs, mostly CO2, the present and future, on a centennial scale, are unique and therefore the past will not inform us.  This is trivially true; the present is always unique (there is only one, after all).  But similar atmospheric conditions have existed in the past.  Has this exact set of circumstances existed in the past?  No.  If nearly identical circumstances had existed, would that tell us what to expect?  No again; climate is chaotic, and profoundly dependent on initial conditions. This has nothing to do with the question of whether or not, or how much, increasing CO2 concentrations will add energy (by retention) to the climate system.  That question is simply a matter of physics – if GHGs block outgoing radiation of energy, then the blocked energy will remain in the system until such time as a new equilibrium is reached.  What the effects of that energy retention will be are what the various branches of science are investigating.   Making early decisions and assumptions — no matter how reasonable they appear — would be an error, along the lines of those made in physics regarding the expansion of the universe. So, why study the past to know the future?  It is my view, shared by others, that the climate system is bounded – limited in its possibilities – and that these boundaries are “built-in” to the dynamical climate system.
From the historical record, the climate system has an apparent overall attractor, one could say, outside of which it cannot go (barring something like a catastrophic meteor strike).  Included in that attractor are the two long-term states known as Ice Ages and Interglacials, between which the climate switches, much like a two-lobed chaotic attractor.  We have little understanding of what causes the shift, but we know it takes place and how long interglacials of the past have lasted.  We also know that during the past interglacials, the average surface temperature of the earth has been remarkably stable – staying within a range of 2 or 3 degrees, producing a period during which Mankind has thrived (for better or for worse), with apparent Warm Periods and Little Ice Ages (cooler periods).  There is no evidence other than the historic record for labeling this the (or an) attractor of the system — but it has the appearance of one. This sounds a bit like I am saying that we can’t predict the far-future climate because of chaos, therefore we must look to the [chaotic] past to predict the climate.  Almost, but no prize.  It is the patterns of the past, repeating themselves over and over, that inform us in the present about what might be happening next.  Remember, chaotic systems have rigid structures; they are deterministic, and Chaos Theory tells us we can search for repeating patterns in the chaotic regimes as well. Of course, this is exactly how weather forecasting was done prior to the advent of computers.  The experienced weatherman, well educated in the past patterns for his/her region, would look to the available data on regional temperatures, air pressures, cloud type and cover, and wind directions, and give a pretty good guess at the coming day’s and week’s weather.
The weatherman knew the bounds of weather for his locality for the calendar date, and with his knowledge of the weather patterns for his area, could feel confident of his general forecast. At this point I would have written about the problematic essence of numerical climate models – Chaos and Sensitivity to Initial Conditions.  I would have run some chaotic formulas, made tiny, tiny changes to a single initial condition and shown how those changes would make huge differences in outcome, then likened this to modern GCMs, general circulation models, the type of climate model which employs a mathematical model of the general circulation of a planetary atmosphere or ocean. Serendipitously, a group at NCAR/UCAR did it for me and produced this image and caption (from a press release): With the caption:  “Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012 for each of 30 members of the CESM Large Ensemble. The variations in warming and cooling in the 30 members illustrate the far-reaching effects of natural variability superimposed on human-induced climate change. The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. The image at bottom right (OBS) shows actual observations from the same time period. By comparing the ensemble mean to the observations, the science team was able to parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change. Read the full study in the American Meteorological Society’s Journal of Climate. (© 2016 AMS.)” The 30 North American winter projections were produced as part of the CESM Large Ensemble project, running the same model 30 times with exactly the same parameters with the exception of a tiny difference in a single initial condition – “adjusting the global atmospheric temperature by less than one-trillionth of one degree”.
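The NCAR experiment can be mimicked in miniature. The sketch below is an analogy only, my own illustration with the logistic map in its chaotic regime standing in for a GCM: the same deterministic rule is run twice, with initial values differing by one part in a trillion, echoing the ensemble's one-trillionth-of-a-degree adjustment.

```python
# Analogy only (my sketch): the logistic map at r = 3.9 (chaotic regime)
# run twice, identical except for a one-part-in-a-trillion difference
# in the starting value -- the "less than one-trillionth of one degree".

r = 3.9
x, y = 0.4, 0.4 + 1e-12      # the two "ensemble members"

max_sep = 0.0
for step in range(200):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    max_sep = max(max_sep, abs(x - y))

# Within a couple hundred iterations the runs no longer resemble
# each other at all, despite identical rules and parameters.
print(max_sep)
```

Same equation, same "physics", a difference in the twelfth decimal place of one initial value, and the two runs part company completely, which is exactly what the 30 CESM panels display.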
I will not repeat the essay here – but it contains what I would have written here.  If you haven’t read it, you may do so now:  Lorenz Validated. Wrap-Up Chaos Theory, and the underlying principles of the non-linearity of dynamical systems and ‘dependence on initial conditions’, inform us of the folly of attempting to depend on numerical climate models to project or predict future climate states in the long-term.  The IPCC correctly states that “…the long-term prediction of future climate states is not possible.” The hope that statistical analysis of climate model ensembles will produce pragmatically useful probabilities of long-term future climate features is, I’m afraid, doomed to disappointment. Weather models today produce useful near-present, daily forecasts (and even weekly for large weather features) on local and regional levels and may produce useful short-term-future weather predictions.  When coupled with informed experience from the past, weather/climate patterns, they may eventually provide regional next-season forecasts.  The UK’s MET claimed this result recently, bragging of 62% accuracy in back-casting general winter conditions for the UK based on pattern matching with the NAO.  Judith Curry’s Climate Forecast Applications Network (CFAN) is working on a project to make regional-scale climate projections.  Success of these longer range projections depends in large part on the definition used for “useful forecasts”. Hurricane path and intensity models have halved their error margins since 1990, achieving a useful average predicted-path accuracy of +/- 50 miles at 24 hours with an accuracy of +/- 200 miles at 5 days.  Hurricane Matthew’s 11th hour shift may be an illustration of these models having nearly reached the limit of accuracy. 
At the end of the day, a deep and thorough understanding of Chaos Theory, down at its blood-and-guts roots, is critical for climate science and should be included as part of the curriculum for all climate science students – and not just at the “Popular Science” level but at a foundational, fundamental level. # # # # # Intro to Chaos Theory Reading List: The Essence of Chaos — Edward Lorenz Does God Play Dice? — Ian Stewart CHAOS: Making a New Science — James Gleick Chaos and Fractals: New Frontiers of Science — Peitgen, Jurgens and Saupe At WUWT: Chaos & Climate series:  Parts 1, 2 and 3 A simple demonstration of chaos and unreliability of computer models At Climate Etc.: A simple demonstration of chaos and unreliability of computer models Determinism and predictability Chaos, ergodicity, and attractors Spatio-temporal chaos Lorenz validated # # # # # Author’s Comment Policy: Since I will still be declining to argue, in any way, about whether or not the Earth’s climate is a “coupled non-linear chaotic system”, I offer the above basic reading list for those who disagree and to anyone who wishes to learn more about, or delve deeper into, Chaos Theory and its implications. Also, before commenting about how the climate “isn’t chaotic”, or such and such data set “isn’t chaotic”, please re-read the Definitions section at the beginning of this essay (second section from the top).   That will save us all a lot of back and forth. I hope that before reading this essay, which is Part 4, you have first read, in order, Parts 1, 2, and 3.  As the essay Lorenz Validated was originally intended as part of this essay, it is suggested reading. # # # # # ## 183 thoughts on “Chaos & Climate – Part 4: An Attractive Idea” 1. rbabcock says: While it is chaotic, there are still forces that cause macro changes that should be predictable.
For instance I can reliably predict in the temperate zones it will be hot in the summer, cold in the winter, spring will start cool and end warm and fall will start warm and end cool. How hot and how cold in each of the seasons is up for grabs. • Kip Hansen says: rbabcock ==> This is a common reaction to the concept of Chaos, Weather and Climate, to the statement that climate is chaotic and can not be predicted long-term, and it is not wrong for you to bring it up. What you say is true, but trivial — the questions are things like “What will the Earth’s climate be like in 100 years?” and “What effect will increasing GHGs have on the climate, at any given rate, in 50 years?” “Will the drought in the Southwestern United States continue for another decade?” “If GW continues, will the US wheat belt become a desert?” It is numeric climate modeling and its ability to predict long-term climate states, to answer questions like these, that are the issue. Of course we can all predict trivial things about weather and climate — children can do it by pattern recognition “Summers are hotter than Winters”. We do pretty good weather prediction on the basis of a few days to a week, but because of the Chaos problem, we can not reliably predict such things as “a hard winter” or a “rainy spring”, even a year ahead, no less climate 50 or 100 years from now. • rbabcock says: My point is there are macro forces that cause things to happen in predictable ways and they are not trivial. Something caused the Maunder Minimum. Something caused the last ice age and it was enough to form ice sheets 5000′ thick. How we got to the end point wasn’t a straight line as there were no doubt lots of feedback mechanisms and this is where chaos comes into play, but whatever was causing the overall trend kept pushing the ball down the road. The real problem is we don’t know yet what the big forces are that change the course of the ship.
If a quiet Sun causes substantial global cooling, there will be a lot of buffers in play that give us warm and ever colder temperatures until either the Sun becomes active again or a final equilibrium is achieved. • Richard Petschauer says: The chaos theory has got more to do with weather and random local variations than long term global average temperatures, which exist even if we can’t measure them. Let’s not make things hopelessly complicated. Yes, temperature vs. heat transfer is nonlinear because of things like changes in water states. But global net heat in and heat out still dominates. • Kip Hansen says: Richard Petschauer ==> If climate science were ONLY about long-term global average surface temperature (which it is not), then we would not have to worry so much about the fact that the climate system is a chaotic dynamical system. You are right about that. • george e. smith says: Your cone gizmo does not seem to address the point, as to whether the trajectory of the system into the entrance aperture of the cone makes any difference. A related question might be, whether photons from some remote location can enter the input aperture of the cone from a full 2 pi steradian hemisphere, and then assuming no losses inside the cone, all exit from the nose of the cone at the bottom. The requirement of no losses, implies that the cone walls cannot be absorbing. They could of course be scattering, with any arbitrary scatter function reflectance, and in that case it is clear that many of those photons must scatter backwards, and hence exit the cone from the front. So clearly in that case, all photons cannot exit the bottom of the cone. So what then if the cone surfaces, are perfectly specular perfect reflectance surfaces, so once again no absorption losses. In this case, it can be shown that once again not all of the photons can exit from the bottom of the cone.
In fact only a small fraction of the photons can exit from the bottom, and that fraction is simply the area ratio of the apertures. So if the top of the cone has ten times the surface area as the bottom of the cone; regardless of the shapes of those (presumed flat) areas, then only 10% of the photons can exit the cone from the bottom. The number of photons per unit area in the 2 pi steradian input and output beams must be conserved. This is a consequence of the second law of thermodynamics, so for the last instance 90% of those photons must be reflected back out of the top of the 10:1 funnel. So a real world cone in a real world physics, cannot direct all photons through that cone from input to output one way. So what of your attractor; does it have any such constraints ?? G • Kip Hansen says: George ==> It’s just a kitchen funnel — which my wife uses to fill vinegar bottles. Nothing more. It is an analogy. • I haven’t studied chaos theory but I have studied systems and control theory extensively, and that gives me at least some trained intuition in systems behavior. I think the main point here is that in the coupled nonlinear chaotic model there are no “big forces that control the course of the ship” outside the feedback system, as your comment seems to say. If there were, initial conditions would pretty much be irrelevant. Rather, the “big forces” are just other feedback loops–stored system energy that is returned to the system in all kinds of unexpected ways and at unexpected times. By the way, I spent a career in the U. S. Coast Guard and also have a trained intuition about the big forces that control the course of a ship.
It’s a useful metaphor, but it would be more useful if we pictured a helm controller that set desired course as the unknown weighted sum of inputs from voting boxes (like in America’s Got Talent) distributed not only to each crew member, but also to spouses and sweethearts, local fishermen, drug smugglers, and random people standing on the pier on the day of departure. • Kip Hansen says: deanfromohio ==> I hold a “Captain’s license” from the USCG and understand the metaphor. The problem can be likened to that of the Iron Mike (as we used to call the autopilot in the Merchants) — it has orders to steer a successful course by compass, but is kicked off course by a whole lot of influences (wind, waves, currents, bent rudders, and in twin-screw ships uneven props, uneven engine speeds, etc.). It manages by making repeated corrections to the results, taking each instant (with programmed delays for the ship design to correct for “meeting the helm” to prevent over-steering). Regardless of the chaos of the seas, it manages in the long run to get you where you are going. Climate likewise is “self-steering” [sort of], being constrained to certain patterns (it can’t make tropical poles and arctic equators, for instance). But like the Iron Mike, climate will make the adjustments in a seemingly random pattern (I have watched the very sophisticated auto-pilot work on my sailboat for too many hours — making minor adjustments port and stbd that seem silly, but get us there none-the-less.) What we don’t understand are the nature of all the influences on the climate, the size of those influences, the sign of those influences — then tack on the fact that the overall system itself is prone to chaos and all we can do is watch it work, really. There will be no predicting long-term climate states with numeric models. • Thanks, Kip! It’s good to “meet” another mariner. Autopilots are nice, when they work.
Also, it takes a seaman’s eye, so to speak, to recognize when the conditions exceed the capabilities of the autopilot, making it safer to steer by hand. Large following seas come to mind. Short of a divine hand, however, we are pretty much on climate “autopilot” but can’t pretend we are the ones in control. Securing for sea always beats counting on any particular weather. • george e. smith says: “””””…… The IPCC has long recognized that the Earth’s climate system is a coupled non-linear chaotic system. …..””””” So one from column A, one from column B, and one from Column C. That’s the same recipe that Mickey Spillane uses to write his whodunit novels. It is still in use today, by overly paid think tanks to come up with catchy names for new companies who then want to have an IPO and collect a lot of free money from eager investors who have too much money and not too much sense of gobbledegook. The pictures are pretty, but to me they are far too “smooth” to represent anything to do with the climate. And I prefer fractals anyhow; prettier pictures, and even less informative. G • Kip Hansen says: The Old Man ==> Did you spend hours, late into the night, watching as your computer added one pixel at a time to a complex image of a fractal on your green screen monitor? …. I admit I did. • LewSkannen says: Yes. I did. Probably takes seconds with my new computer if I still had the old code. • Kip Hansen says: LewSkannen ==> A couple of years ago I found all my old Basic chaos programs, and copied them to modern storage media. I found a Windows Basic emulator and fired up a few of them. Instead of the expected slow appearance of some attractor or fractal, they popped up as if they were jpeg images. I had to go in and add time wasting loops between iterations to slow them down enough to make them interesting. 🙂 • Crispin in Waterloo but really in Bishkek says: Well that’s embarrassing. I did it too, with my children. We had a Bondwell with no HDD. 
We used to let the fractal program run overnight and edited the code to produce a quickie pic to see what was worth drawing. DRDOS 3.4.1. The long-term, vaguely predictable effect was that one son became a talented server manager at a huge company and another is both a robotics technologist and a computer science engineer. I have always attributed this to them having access to a computer in the 80’s in spite of living in a very poor developing country in Africa. • catweazle666 says: Indeed Kip – and then along came a 386 running Fractint! • The Old Man says: Sure did… back in the days of the 8087’s • The Old Man says: Sure did.. back when the 8086 got the 8087 coprocessor, it seemed an order of magnitude faster… 🙂 • steveta_uk says: I used to leave the Atari running overnight doing stuff like fractals and ray tracing – it was amazing how realistic reflections could look! • Jason Calley says: Holy cow! I thought I was the only single-pixel watcher back when. Then there was always the excitement of finally getting to sleep and then waking up to go check what has changed over night. My wife and son thought I was a bit crazy. (Hmmm… maybe I am still guilty as charged!) • Kip Hansen says: Jason Calley ==> Mine knew I was nuts….but always wanted to see the latest results too. Kinda miss those old days…. • Kip Hansen says: steveta_uk ==> It was an exciting time — all that computing power, ability to write your own programs, delving into barely known territory….loved it. • The Old Man says: steveta_uk ==> Realistic for sure; which, for those of us who patiently played with the Atari’s et al on the long haul exploration discovered that everything is connected in ways that are far beyond the addictive hints supplied by our mathematical toolboxes. Compelling. Still is. Is the fern in the article real, or was it a computer generated Barnsley fern? Better still, does it matter… Once more unto the breach, dear friends, once more; ..
🙂 • noaaprogrammer says: I still have the students in my computer graphics class generate fractals using IFS tables. 2. RPT says: Mr Hansen Any chance that you could provide this article series, and the Lorenz Validated, as PDF? • Kip Hansen says: RPT ==> Email me at my first name at the domain i4 decimal net, and I’ll see what I can do. • RPT says: Thank you, and have a great boat trip in NC! Your simplified narrative of Chaos Theory (really the Chaos Branch of Mathematics) is really great; simplifying something that isn’t simple is not easy, and is really the ultimate test of deeper understanding. This leads me to how I blew my greatest chance of scientific achievement. When an undergraduate in university, I was doing an extended exercise that required a CFD solution of the Navier-Stokes equations, running a RANS with the Pi-Epsilon to fill in the heavy part of it. However, as an undergraduate, I was limited to 1 minute of CPU at the Univac. This didn’t cut it, so I punched a few more cards (anybody remember punch cards?) to dump the status after 55 seconds, and then start a new run picking up the results from the previous run and do another minute and so on (I ended up losing control of the process, and was happy just to be told the cost of running the Univac continuously for 2 days in my last run before I was allowed more CPU time). But I didn’t get good results: I started dumping the intermediate results with a specific (low) number of decimals, then increased decimals, increasing still to ordinary machine precision, then to double precision. Each time I ended up with a different result, a result that ALMOST converged! So if I had understood the significance, and Lorenz hadn’t seen the same significance in a very similar way about 10 years previously… Guess the difference between a genius and the rest of us is that when we are sitting under an apple tree being hit by a falling apple, the true genius discovers and quantifies gravity, while the rest of us just get annoyed!
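For anyone who wants to relive those single-pixel nights, the IFS-table approach noaaprogrammer mentions fits in a few lines of Python. The coefficient table below is the classic published Barnsley fern (not anything from the article itself), rendered with the standard “chaos game” iteration:

```python
import random

# Classic Barnsley fern IFS table: each row is (a, b, c, d, e, f, p).
# The affine map sends (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# and is chosen with probability p on each iteration.
IFS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # ever-smaller leaflets
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # largest left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
]

def barnsley_points(n, seed=0):
    """Play the 'chaos game': iterate the IFS n times, keeping every point."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        r, cum = rng.random(), 0.0
        for a, b, c, d, e, f, p in IFS:
            cum += p
            if r <= cum:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        pts.append((x, y))
    return pts

pts = barnsley_points(100_000)
# The whole fern fits in roughly -2.2 < x < 2.7 and 0 <= y < 10.
print(min(y for _, y in pts), max(y for _, y in pts))
```

Plot the points with any scatter tool and the fern appears — the same picture whether drawn one pixel per second on a green screen or all at once today.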
3. Marcus says: ..Wow, great post, but man, did that hurt my head !! I’d be willing to bet that most CAGW believers would not even bother to take the time to read the entire post !! Awesome job…again !! P.S. ( I’m confused on this comment) …” There is no evidence other than the historic record for labeling this the or an attractor of the system — but it has the appearance of one.” Typo or a word missing ? + 199 gold stars • Kip Hansen says: Marcus ==> “There is no evidence other than the historic record for labeling this the or an attractor of the system — but it has the appearance of one.” Typo or a word missing ?” No, just complicated sentence structure. I’ll try it again: “There is no evidence that the two-lobed, Ice Age/interglacial, apparent structure is the or an attractor for the climate system. The only supporting facts are in the historical record, in which the IA/Interglacial alternation looks like, has the appearance of, an attractor.” There is a vast difference between something that “looks like” and something that is “evidence for”. 4. Marcus says: How many mathematical variables would an actual “Climate Model” need to use to be somewhat accurate ?? My guess, 1,000,000…Give or take a 1,000 !! Beyond possible today…IMHO… 5. actually weather and climate does look chaotic, on a small time scale – say of 1 year – like in this example: https://i2.wp.com/oi64.tinypic.com/vyxdld.jpg but on a longer time, looked at in the right periods, the weather is as predictable as a clock, an old pendulum clock staggering really, if it were not so, you and I would not be alive today • Kip Hansen says: HenryP ==> Yours is an example of “long term averages” and pattern matching. Such patterns can help weather forecasters predict such things as multi-year drought. • Crispin in Waterloo but really in Bishkek says: RSA summer rainfall areas will see progressive drying, peaking in 2021 and then return to “average” conditions by 2025. The cycle is Metonic. • george e.
smith says: Well HenryP, why don’t you input your set of data points; those same exact numbers from your first graph, into M$ Excel. And click on a different graph type to give you a sort of cubic spline fit to your data points. Your graph as plotted is not even a graph of a band limited signal, so it is under-sampled by who knows how many orders of magnitude. In other words it is total BS. And no it is not even remotely chaotic; just noisy or fluctuating. Why is it that people who seem to work on “climate science” are the only people in the universe who claim to be doing science who are completely unaware of the Nyquist sampling theorem, or any of the other mathematical bases for sampled data system theory? Real functions of real variables in the sense of real physical systems do not EVER have points of infinite curvature. G • “unaware of the Nyquist sampling theorem” Yes, I’ve noticed this too. Another example is the ice cores and other paleo data whose sample rates vary between a few years and a few centuries all within the same dataset and this disparity is rarely accounted for. This is far from surprising since denying first principles is what keeps the CAGW side of the debate from imploding. 6. Alan Robertson says: I’ve read warmist rationalizations about why Lorenz’ work doesn’t apply. Their thinking was so shallow and easily refuted, that I didn’t bother to bookmark the item(s). Wish I had… would make for a good laugh, this afternoon. • Alan Robertson says: Sorry about that post. I’ve done the same thing that I was railing against… assertions with no proof. A quick run at my search engine produced hits. One of the 1st was at SkS and I won’t link to that site. • Kip Hansen says: Alan Robertson ==> No worries. The fact that there are climate people who don’t understand this topic is the reason I wrote this series.
As you read through the comments, you’ll see that some still do not get (or refuse to accept) the important fundamental situation with Chaos and Climate. 7. Terrific series. Nice close. 8. The quote from the TAR which I see is an attractor here, goes on: The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential. And this is shown by the ensemble of 30 model runs for winter trend in North America. Chaos means that a very small variation in initial conditions produces different results. That means that there is really no useful connection between those initial conditions and the results. The perturbations, instead, are a good emulation of natural variation, which is also the part of weather history that we can’t predict. And the point of their display is that there is indeed an attractor, shown as the ensemble mean in the second last plot. That is the TAR’s “statistical information”. And you can compare that with the observation, in the final plot. Of course the correspondence is not exact. That is because the observations are of a system which has natural variation. • Kip Hansen says: Nick ==> I’d love to see a comprehensive essay on what sort of useful predictions you feel would come from such work. It is a gross misunderstanding of Chaos Theory to say “And the point of their display is that there is indeed an attractor, shown as the ensemble mean in the second last plot.”. The ensemble mean does not fulfill, or even approach, the definition of a chaotic attractor. 
The mean of 30 chaotic outputs is just that and nothing more — the mean of 30 chaotic outputs. • “The ensemble mean does not fulfill, or even approach, the definition of a chaotic attractor” It is not the definition, but it is a way of estimating. Your Wiki quote starts: “an attractor is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system” and you estimate those values by sampling, and choosing, as the statisticians say, a measure of central tendency, of which the mean is simplest. You can see this in your butterfly plot. What is the attractor here? It isn’t the whole plot; it is typified by the purple bent figure 8 traced out in the middle (though it is more complicated here, being a strange attractor). How can I say that? Because the eye does local averaging; that is where the trajectories are most dense. But to put a number on it, you would average over cross-sections, with maybe a down-weighting of outliers. Or you might take a median, or some such. • Kip Hansen says: Nick ==> The CESM-LE runs simply do not tend to evolve toward the mean at all….they diverge from the mean, get further away from it, with each iteration. If the CESM-LE experiment on NA Winters were repeated with a time span of 100 years, the ensemble members would be further away from the mean, there would be a greater spread. There may be some meaning, something to be gained, from an Ensemble Mean, but it certainly does not represent a chaotic attractor for the model. • Eric Slattery says: Nick, The chaotic attractor also has to do with how the system proceeds in time and space, which is represented as the attractor in Phase Space. Ensemble temperatures may not yield anything in terms of an attractor for climate, especially with the amount of measurement error. 
Plus when you factor in corrections for changes in locations they have made, there’s a boatload of measurement error before you can even begin to start making sense of and running dimensional analyses on data such as this. And finally, the time between measurements is also fairly huge (in terms of trying to understand the system) and you miss a lot of information with only 1-2 recordings or averages of a day’s recordings, instead of having many temps for 1 day. Yes it becomes more of a burden and eats up more computer time, but if you want less error, you have to bite the bullet. Especially with the amount of money they are asking everyone in the world to cough up to fight CO2. But, measuring the multifractal properties of climate is far outside the scope of possibility right now, mostly due to storage requirements, computing requirements, and some further mathematics that needs to be developed beyond multifractals, whatever that may be. • “they diverge from the mean, get further away from it, with each iteration” The systems people like to draw here are autonomous – the equations themselves are not dependent on time, so you can just let them go with the expectation that nothing has changed when they come back again. There is no need to do different runs; you can just let the same run go on and on. That could also be true for climate with constant forcing, but for real climate it isn’t. That is why, to see the range of trajectories, you have to have multiple runsno evidence of divergence over runs. I think you are confusing it with divergence over time. One thing about your series – I have not seen you show an actual attractor – just a series of trajectories. How would you ever get those “set of numerical values toward which a system tends to evolve” without some measure of central tendency? • ” That is why, to see the range of trajectories, you have to have multiple runsno evidence of divergence over runs.” Words went missing.
I meant “That is why, to see the range of trajectories, you have to have multiple runs. I see no evidence of divergence over runs.” • Kip Hansen says: Nick ==> I’m not sure if this is going to end up being constructive, as we are talking past one another a bit here. I am referring specifically to the 30 Earths image and the paper that produced it. 30 runs of identical code and parameters with infinitesimal alteration of a single initial value. The 30 runs diverge across the phase space of possibilities. More runs might show more divergence, longer would definitely show more divergence — until the boundaries of possible climates allowed by the model are reached. More runs would not tend to converge on the ensemble mean — running each run longer would also not tend to converge on the ensemble mean. I would like to see you write up what it is you think about the usefulness of statistical analysis of chaotic results — multiple runs of identical climates as in the CESM-LE. • Mike Jonas says: “More runs might show more divergence, longer would definitely show more divergence — until the boundaries of possible climates allowed by the model are reached.” Kip Hansen, you nail it in this statement, but many (particularly those in the warmist camp) will miss the significance. The point is that the model results are not actual possible climates, and they do not in any shape, size or form represent anything real on this planet now or in the past or in the future. They are simply the results that are allowed by the model. • Kip, I’ve written a response below. • Kip, I have written an initial post here on chaos, CFD and GCMs. I’m planning a follow-up which will have a gadget that lets you generate your own Lorenz butterflies, varying the initial points and parameters, and manipulate the images in apparent 3D (WebGL). I think there are points to illustrate on the relative importance of initial conditions (small) and equation parameters (large), and of trajectories vs attractors.
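For readers following this exchange, the three properties being argued over are easy to see with the Lorenz 1963 system itself — a toy stand-in here, not the CESM-LE model: identical runs reproduce exactly, an infinitesimal change to one initial value produces a trajectory that departs from the original, and yet every run stays inside the same bounded region of phase space. A rough forward-Euler sketch in Python, with the textbook parameter values:

```python
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x0, y0, z0, steps=40_000):
    """Integrate for `steps` steps, keeping the whole trajectory."""
    x, y, z = x0, y0, z0
    traj = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        traj.append((x, y, z))
    return traj

a = run(1.0, 1.0, 1.0)
b = run(1.0, 1.0, 1.0 + 1e-9)  # one initial value nudged in the 9th decimal
c = run(1.0, 1.0, 1.0)         # exact repeat of run `a`

# Deterministic: an exact repeat reproduces the trajectory bit-for-bit.
print(a == c)  # True

# Sensitive: along the way the nudged run separates from the original by O(1).
print(max(abs(ax - bx) for (ax, _, _), (bx, _, _) in zip(a, b)) > 1.0)

# Bounded: neither run ever leaves the attractor's region of phase space.
print(all(abs(x) < 30 and abs(y) < 40 and -5 < z < 60 for x, y, z in a))
```

Both sides of the argument are visible at once: the perturbed run is useless as a point forecast of the original, yet both runs explore the same bounded region — which is the whole question of what, if anything, an ensemble mean of such runs represents.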
• Kip Hansen says: Nick ==> Having arrived, at last, at our winter digs, I’m just catching up. I’ll check out your post and discuss on your site. Thanks for writing it. • Sorry but I can’t accept that. A straight average is an assumption of usefulness. Why would each result not need to carry some weighting in the averaging process that we have no understanding of how to evaluate? Just look at an extremely simple 3-body problem with 2 fixed large masses and a single moving small mass (plenty of simulations available on the internet). Averaging say 30 runs with close initial starting points would have no validity of, in any way, representing an expected system. It would be of no value/meaning at all except the obvious that total kinetic and gravitational potential energy will be a constant. We have one climate system and I’m afraid its evolution will be chaotic and unpredictable in terms of significant climate patterns and their possible repetition. Looking at the past and expecting to predict longer term evolution will be futile. Prediction of more storms, fewer storms, stronger storms, weaker storms as the CO2 perturbation continues will be futile, and it is disingenuous to suggest that we can. We have cycles of El Niño and La Niña at present; we have no idea if that will continue or another regime will evolve (emerge) sooner or later, and whether that will be good, bad or indifferent. We’ll have no idea if CO2 was a driver either. That’s chaos in its true mathematical/physical form. 9. Latitude says: Kip, essay was so good….I could only come up with one snarky……. 😀 Some maintain that because we are changing the composition of the atmosphere by adding various GHGs, mostly CO2, that the present and future, on a centennial scale, are unique and therefore the past will not inform us. This is trivially true…….no, it’s just an excuse for not being able to predict anything • Doonman says: The fact of the matter is that all moments in time in this universe are unique.
If they weren’t, then the universe could not be expanding. • Kip Hansen says: Doonman ==> If it is expanding, which is currently being contested… • Crispin in Waterloo but really in Bishkek says: I understand that the challenge to the expansion of the universe is the notion that its expansion is accelerating. • Kip Hansen says: I think you’re right … just a point about how seemingly finished science sometimes falls apart after more research. • MarkW says: Kip, I thought it was whether or not the rate of expansion was accelerating that was being contested. Not expansion itself. • Kip Hansen says: Mark ==> You’re probably right — just an example of how things that are sometimes decided and done are re-visited and come up different. 10. Kip, if I understand the argument, some of the apparent patterns in weather might have no “cause” other than the chaotic workings of the system. So solar variability might not be a direct cause of things like the Little Ice Age. • Kip Hansen says: Tom Halla ==> The first bit is true — apparent patterns appear spontaneously in chaotic regimes. The second bit on solar variability or solar cycles is possible, but not implied by Chaos. Solar input is a major parameter of the climate system, and changes in it could have outsized (non-proportional) effects. • Paul of Alexandria says: One question regarding the solar effects: What can one say about chaotic systems and repetitive input signals? I recall some work on encrypted communications through synchronized chaotic systems where there are two chaotic systems which provide the encryption and decryption keys, respectively. By applying the appropriate (and secret) signal to both chaotic systems, they both follow the same path, thus providing a coherent key. https://www.hindawi.com/journals/mpe/2014/782629/ http://www.igi-global.com/chapter/encryption-analog-digital-signals-through/43310 • Kip Hansen says: Paul ==> Boy, that’s deep!
But there are two keys to understanding this — chaotic systems are totally deterministic. If a set formula is run multiple times with exactly the same starting point and parameters, the outputs will be identical. Therefore, they use synchronized chaotic keys to encrypt and decode — and through a subtractive process remove the identical chaotic signals, leaving only the added-on information. In reality, what they do is much more complicated, but that’s the basics. 11. Leo Smith says: Mmm. You are right to say that we can’t predict the future of chaotic systems now, but that doesn’t mean we can’t one day, or that we can’t predict SOMETHING about them. Even if it’s as trivial as ‘the world’s icecaps won’t melt in the next 500 years’. Also we may find that although climate is chaotic there are strong negative feedbacks that constrain it to a fairly small area of climate response. • Kip Hansen says: Leo ==> We will not be able to do numeric long-term climate prediction, not as this is commonly understood, and no amount of additional computing power or more data on a finer scale will change this. Non-linear dynamics (chaos) simply makes it impossible. We can, of course, come to better understandings about the Earth’s climate, its repeating patterns and associations between those patterns — and thus improve weather prediction and maybe even near-present (next month, next season, the coming winter) general predictions. The fact that the climate system is constrained to a fairly narrow range helps in enabling general prediction out a bit in time — but not long-term, as the profound sensitivity to initial conditions, as shown in the 30 Earths paper, makes each prediction different, not only quantitatively, but qualitatively, after even short time periods. • Paul of Alexandria says: Most chaos simulations assume a reasonably steady-state set of inputs and internal conditions, the object being to demonstrate sensitivity to small initial changes in these.
However, remember that these are deterministic systems! In any physical system one can determine overall response to a large, or periodic, perturbation, and it is not unreasonable that our climate would respond in a significant way to such events. Exact prediction is, of course, impossible, as is long term prediction, but one can predict a general path for quite a ways out. Look at the Lorenz picture: in general, you can follow a part of the track for quite a ways; there are relatively few places where it takes sharp bends and goes off in a completely new direction. • Kip Hansen says: Paul of Alexandria ==> Remembering that the Lorenz Attractor does not represent the climate in any real way. 12. My general comment is that, while “Climate Change” was the term used from the beginning of the IPCC, it was the assertion of catastrophic increase in our global mean temperature caused by changes in the spectra of the atmosphere, AlGoreWarming, which is the trillion dollar destructive boogeyman. Climate is infinitely more complex than mean temperature, which is totally determined by the energy balance over what would be called a control surface in Heat Transfer courses around the planet and atmosphere. The issue of mean temperature is more akin to Gas Laws than understanding the internal chaos. The internal eddies are, as evident here, a far more complex issue, but have only minor influences on the radiative flows thru that control surface. There is no spectral phenomenon which can explain (which means quantitative equations) why the bottoms of atmospheres are hotter than that calculated for the control surface around the lumped planet+atmosphere. So that is a separate issue too. • Kip Hansen says: Bob Armstrong ==> It is an oddity that the Climate Science world has focused on a single metric as if it were the be-all and end-all of things climatic.
The metric they use, a bizarre blending of near-surface air temperature averages and sea surface water temperatures averages, is non-physical and, in many senses, nonsensical. It only tells us what the current situation is with retained energy manifesting at that moment as sensible heat. • You look at politics in general tho , and the mental level reduces to the mode . And reducing the issue to a scalar reduces it to the 1 dimensionality of Left-Right politics . I have never seen any quantitative functional unfolding of that scalar mean to the continuous climatic dance . 13. Bob Weber says: Climate is not chaotic; the only chaos is in the thinking processes of most all climate ‘scientists’, who have unnecessarily over-complicated the entire climate field due to a complete over reliance on the idea of feedbacks, which is due to the huge gaping hole in their understanding of solar variability effects. The only climate ‘attractor’ is TSI/insolation. • JohnKnight says: Bob, The term ‘chaos’ was incorrectly (and asininely) applied by Mr. Lorenz, it seems blatantly obvious to me . .. but hey, geeks ; ) It seems to be a lot like what goes on in a Pachinko machine . . the balls are gonna end up at the bottom in one or another of the slots . . but not being able to predict which one makes it “chaotic” in geekville ; ) 14. A true attractor story concerning North America’s largest heavy truck assembly plant, published in Journal of Strategy in 1999 as ‘A new productivity paradigm’. We built a nonlinear dynamic model of the plant using a modelling tool, STELLA, developed at Dartmouth. This was done in order to predict improvement impacts of various operational effectiveness stratagems. One thing the model did was predict a sharp fall in quality (evidenced by trucks needing rework after the end of the assembly line) if the ratio of special option orders to standard option orders crossed a threshold about 50-50. 
The plant ordinarily operated in high special order (60-70%) ‘chaos’ with lots of rework, all quite expensive. So we cooked up a four month experiment, giving dealers significant price incentives (equal to rework costs) to standard order. Sure enough, rework fell close to zero as special orders fell below the threshold. And at the end of the trial, rework shot up as special orders increased past the threshold. Like a toggle switch two lobed attractor. The strategic answer was a permanent change in pricing policy. Specials were repriced up, standards were repriced down. Was the very first peer reviewed paper applying nonlinear dynamics ‘chaos’ theory to manufacturing. Climate is a heck of a lot more complicated and interactive than heavy truck assembly. The climate models haven’t a chance. Arguing they deal with boundary conditions rather than initial conditions isn’t right as this post shows. CMIP5 gets two boundary conditions significantly wrong. There is no modeled tropical troposphere hotspot. And modeled ECS is ~twice observed. • Kip Hansen says: ristvan ==> Thanks for checking in and contributing. Great story about the truck plant. 15. The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible True but when external outside sources such as the sun change enough they are going to have a climatic impact , which is happening now as the sun goes into a prolonged minimum state of activity. Cooler global temperatures will be forthcoming. 16. whiten says: The only problem I personally have with Chaos Theory and climate is that climate is not a dynamic system, it is not weather, not even weather in long term. 
In principle, climate is not even a system; it is an atmospheric configuration of a long term atmospheric system process, which is not driven by “random” initial conditions……….It is static……….not dynamic like weather. Anywhere I look at the data I see no initial condition responsible or required to be. The periodicity and climate cycling supports this even at the point when considering so much error and misinterpretation of data. But whatever cheers • Paul of Alexandria says: Not correct. It is a system in that it follows known laws of physics (Boyle’s, Newton’s, etc.) and one can apply these laws to the current state and inputs to determine the next state (one second from now, one day, one year, etc.). The problem with a chaotic system is not that you cannot predict future states, but that you have to have an amount of information approaching infinity to do so accurately into the future. • Whiten, all nonlinear dynamic systems produce mathematical chaos. That does not mean they are always chaotic, as my truck assembly plant example shows. A nonlinear system is simply one with feedbacks. Water vapor, clouds, albedo… a dynamic system is one with time delays. Since climate feedbacks obviously do not act instantaneously, climate is by definition a nonlinear dynamic system, therefore mathematically chaotic. Just as the TAR said. Which means no one should rely on any climate model output. An inescapable fool’s errand. • whiten says: ristvan October 22, 2016 at 1:32 pm Hello ristvan. You see ristvan, there is a problem for me there. I think I understand your point made, hopefully, and I appreciate it, but you see there is a permanent condition there that I have problems with, which in my mind stands as a contradiction and a paradox ………the main thing of GHE, the permanent condition of the radiation imbalance being always positive even in its variation over time.
it is a permanent condition, not an initial condition, not actually allowing for any other assumed random initial condition to default it; it is permanent and static over time…………Even Al Gore knows this much…….. As I have said before, this is a result of trying to explain climate and climate change only by radiation physics………………and that is what I think a fool’s errand. cheers • Whiten, the no-feedback GHE is always positive. Basic physics confirmed in the lab. Don’t confuse equilibrium with nonequilibrium conditions, or any of the other silly stuff (gravity gradients) out there on the web. The main issue is Earth’s secondary feedbacks to that primary forcing. Now, that is a big uncertainty. Opinions vary from high positive to negative. My own opinion after 6 years of reading many hundreds of papers and doing my own unpublished analysis is positive but about half of modeled, ~1.6-1.7 rather than 3-3.2. So no CAGW. And because the likely Bode value is ~1.25-1.3, well behaved with no runaway. Unlike Monckton. But if you have been following, you know those arguments verbally and mathematically. Best single paper on observational sensitivity might be Lewis and Curry 2014. Regards. • Geromino Stilton says: That is incorrect. Only a very small fraction of nonlinear systems are chaotic in the precise mathematical sense. And there is no evidence that the climate is chaotic. Summer is almost always warmer than winter and the average monthly temperatures do not vary significantly from decade to decade. The exception being when the earth enters or leaves an ice age, which can happen very quickly — suggesting that the climate is bi-stable rather than chaotic. Also the more general claim that you cannot make useful probabilistic statements about a chaotic system is also wrong. As it is stated in the essay, the climate system, while chaotic, is bounded. Knowing those bounds would be extremely useful.
Alternatively, knowing that the temperature in a particular region will be between X and Y degrees 99% of the time would tell you what crops you could plant. Similarly, knowing whether average rainfall will increase or decrease would tell you whether or not you needed bigger dams, etc. 17. Paul of Alexandria says: “…an attractor is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed.” It’s important to note that the values approach the attractor, but never (or seldom) actually equal it, or repeat themselves. If the values ever exactly repeat a previous value then the system – being deterministic – will simply repeat the same path again (and would probably not be truly chaotic). • Kip Hansen says: Paul of Alexandria ==> Take a look at the earlier parts in this series — a dynamical system, which is chaotic in some regimes, can be extremely, persistently, stubbornly stable; they can be periodic, and they can be chaotic within bounds. In the chaotic realms, some dynamical systems do get into repeating loops from which they do not emerge. Chaotic attractors are regions of phase space that the system cannot escape, etc. etc. But, yes, if a system has a single point attractor (for a particular set of parameters and initial values) then that exact system is in a stable, non-chaotic state. • Paul of Alexandria says: Thank you. It is also possible, I suppose, that for a given system some combinations of inputs and states are chaotic and some are not. 18. n.n says: It’s not just the climate system. Closer to home, a human life is a chaotic process, with a known source in the scientific domain: conception, and a known sink: death (however imperfectly described), while the states and transitions between conception and death are unpredictable except in limited frames of reference (i.e. the scientific domain).
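Paul’s closing observation — that one and the same system can be stable for some settings and chaotic for others — is exactly what the logistic map from earlier in this series shows. A quick Python sketch, using standard textbook parameter values (nothing climate-specific):

```python
def logistic_orbit(r, x0=0.4, burn_in=1000, keep=8):
    """Iterate x -> r*x*(1 - x), discard the transient, return the tail."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

print(logistic_orbit(2.8))  # stable: settles on the fixed point 1 - 1/2.8 = 0.642857...
print(logistic_orbit(3.2))  # periodic: alternates between two values forever
print(logistic_orbit(3.9))  # chaotic: wanders without repeating, yet stays in (0, 1)
```

Same equation, three qualitatively different regimes, selected only by the parameter r. (As a side note: the r = 2.8 fixed point, 0.642857…, may well be the mysterious “0.64285” debated further down the thread — a logistic-map fixed point rather than a Fibonacci reciprocal.)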
• whiten says: DNA and RNA must be chaotic codes….:) • Louis says: Please try to be more considerate when posting images. A third of Americans are afraid of clowns. 🙂 19. 0.64285 is the reciprocal of a Fibonacci number. • Richard Baguley says: One divided by 0.64285 is 1.55557. 1.55557 is NOT a Fibonacci number, because Fibonacci numbers are integers. • I'm not sure what your point is; 1/0.64285 = 1.55557, not a Fibonacci number. Phi, what the ratio of successive Fibonacci numbers converges to, is 1.618 (hey, I remembered the equation), and of course its reciprocal is 0.618. So you aren't trying to describe that either. • Richard Baguley says: The point is: "0.64285 is the reciprocal of a Fibonacci number." is a false statement. You are confusing the golden ratio (1.618034), which is the limit of the ratios of successive terms of the Fibonacci sequence. Ratios and reciprocals are two different things. • Splitting hairs over 0.05? I'm sure somewhere in the universe it matters. Good enough for climate science. 20. John Silver says: LOL, in Definitions you "forgot" to define climate. Of course, that was expected. 21. Trying to understand how the climate operates from an Earth-bound perspective is like trying to understand how a gasoline engine works by observing it from within the combustion chamber. We can model the chaotic combustion process, and if the model is detailed enough, we can even predict how much horsepower will be produced. We can also calculate how much horsepower will be produced by applying basic thermodynamic principles to the inputs and outputs, and avoid the complications of chaos. The IPCC and consensus climate science make the climate system seem far more complex than it really is, and the reason is to provide the needed wiggle room to support what the physics cannot. It's the difference between calling the climate a chaotically coupled surface/atmosphere system and a causal thermodynamic system responding as a unit.
The latter is easier to quantify, but doesn't get the result they need, so they invoke the former and bamboozle you with complexity to support the answer they want. The only long-term attractors relative to the surface temperature are the requirement for equilibrium and the Stefan-Boltzmann LAW. The apparent chaos in the climate system is more about the transition from one state to another and not so much about what that end state will be, but this chaos is weather, not climate. Sure, there can be changes to the system that can make local variations in weather seem more extreme and even have a small effect on the average, but the global average climate (the end state, or the average temperature) is only a function of the total sunlight received, modulated by slight variability in the average yearly albedo. Given a constant system and constant solar input, the only factor that can make a difference to the average is a change in the average albedo. Increasing CO2 slightly changes the system, but has little effect on how the system responds. • Bob Weber says: Sorry guy, but there is no "…constant system and constant solar input…" Otherwise I've always enjoyed your POV. • Bob, Varying only one variable at a time when quantifying a system response is best practice for system analysis. Even the IPCC does this with their metric of forcing and sensitivity, which is equivalent to the final effect of one additional W/m^2 of post-albedo solar input, keeping all else constant. We can also keep the solar input and albedo constant and vary the system by doubling CO2, which is effectively what is done when considering that doubling CO2 is EQUIVALENT to 3.7 W/m^2 of incremental post-albedo solar input power (forcing per the IPCC definition). That is, both have the same ultimate effect as long as the other remains constant.
The real climate system is certainly more complex, and the system itself is dependent on the state (the temperature), but we can quantify this by observing how the system changes between winter and summer; and of course CO2 emissions, concrete and particulate emissions all change the system too. There are certainly meta-stable states in the surface energy distribution, for example El Nino, but they often have an offsetting state on the other side of equilibrium, for example La Nina, and these redistributions of energy have subtle effects on the dynamic state. But if you're trying to model El Ninos, La Ninas and other meta-stable states, you're not modeling the climate, but modeling the weather. • n.n says: Ironically, the argument that the system is perfectly stable is justification that CO2 is indeed evil. Given a perfectly stable system (i.e. operating within a stable envelope), any perturbation can force catastrophic change through direct or cumulative effect. • n.n: "Given a perfectly stable system (i.e. operating within a stable envelope), any perturbation can force catastrophic change through direct or cumulative effect." This is incorrect. A system that is stable and operating within a stable envelope for a very long time under the influence of a variable stimulus is so because any perturbation, large or small, does NOT cause a catastrophic effect. This can even be shown mathematically (Bode 1945). Note that there's a difference between a time-varying response (a changing system) and a time-varying stimulus, although both can result in equivalent effects. Stability is an attribute of the system, but a time-varying system is not necessarily unstable, nor even a requirement for instability. • tony mcleod says: This is perfectly correct. The climate has been relatively non-chaotic for 10,000 years, or at least inhabiting a fairly narrow range, as atmospheric and oceanic chemistry remain fairly unchanged.
There has been nothing to nudge the climate in any particular direction, just a steady rise to the Holocene optimum, then a steady decline, give or take one or two minor perturbations. Now things are changing fast, extremely fast; the system may, repeat may, be getting a shove. Dumping aeons of stored carbon into the atmosphere in the geological blink of an eye is surely in the realm of a shove. Rising temperatures may, repeat may, be the start of the system flipping towards some new attractor. There is evidence suggesting the risk is non-zero. I get it that most reading this will disagree; that hasn't seemed to have stopped the steady decline of Arctic ice nor glacier retreat – the two obvious early indicators. • Tony, You are surely jumping to conclusions here. On what basis do you think that CO2 is pushing the climate? If your authority comes from the IPCC's conclusions, then I feel sorry for you as having been completely bamboozled by fear-driven propaganda crafted for the political end of redistributive economics under the guise of climate reparations. Sure, CO2 is a GHG, but the physics quantifies the maximum effect incremental CO2 can have, and it's quite small, at about half of the lower limit claimed by the IPCC. The 'feedback' they claim amplifies this tiny effect into something massive is based on a broken analysis that assumes a source of power other than the Sun and assumes that the relationship between forcing and temperature is linear; moreover, what they claim to be this small pre-feedback result is actually the final result after all feedback, positive and negative, known and unknown, has had its effect. https://wattsupwiththat.com/2016/09/07/how-climate-feedback-is-fubar/ Nothing about contemporary temperature trends is unusual. The ice cores show that the RMS change in temperature of multi-decade averages (see EPICA Dome C data) is almost exactly what we are seeing today in the short-term averages.
The null hypothesis suggests that the warming we have seen in the last century is the highly predictable recovery from the LIA, especially when analyzed after the fact, and considering that claims to the contrary are supported by nothing more than unsupportable rhetoric. • Alan Robertson says: Tony Mcleod, You just scored a "safety"… made an "own goal". • catweazle666 says: tony mcleod: "the steady decline of Arctic ice nor glacier retreat" Both of which are manifestations of cyclic phenomena, of course. And therefore, not causes for alarm. Unless whipping up fear – not necessarily knowingly – is the way you make your living… • tony mcleod says: I'm talking about changes to a potentially chaotic system and the possibility that changing the physical and chemical characteristics at such an extreme (geological) rate may have surprising effects. Just my opinion, guys. Apparently the science isn't settled, so I think the possibility is real and that it's prudent to consider. • Tony, "I'm talking about changes to a potentially chaotic system and the possibility that changing the physical and chemical characteristics at such an extreme (geological) rate may have surprising effects." You're worrying about an impossibility. The rate that the system is changing relative to CO2 concentrations is insignificant. It's nothing more than a small, gradual increase to the baseline GHG effect, which is otherwise modulated at a high rate and magnitude by the dynamic effects of water vapor, and at any one time varies over a wide range across the planet's surface. The climate system is perfectly capable of handling arbitrary changes in GHG concentrations, or even a major disruption to the system, as has happened many times before with volcanic eruptions and impact events.
Precaution is an option when there's the possibility of a real danger, but excess precaution in the form of expensive preemptive action must be framed in the context of acceptable risk and cost-benefit analysis, especially when the perceived danger is speculative, the risk is demonstrably small, the claimed effects are theoretical, and the mitigations are widely considered ineffective, as many smart people dispute the danger, risks and effects on solid scientific grounds. • tony mcleod says: Right, so the science is settled? • tony, "Right, so the science is settled?" Well, the physical laws that tell us that the sensitivity is far less than claimed are immutable. One of these is the Stefan-Boltzmann LAW, and I emphasize LAW, while CAGW is a speculative hypothesis. This law tells us that emissions are proportional to the temperature raised to the fourth power. Conservation of Energy tells us that surface emissions must be offset by input power, otherwise the surface will cool. The slope of the SB curve at the average temperature of the planet is less than 0.2C per W/m^2, while the slope at the 255K emission temperature of the planet is about 0.3C per W/m^2, and these set the upper and lower bounds of the sensitivity, which is well below the 0.8 +/- 0.4C per W/m^2 range claimed by the IPCC and the self-serving consensus it crafted around its claims. Each of the 239 W/m^2 of incident power contributes to the 385 W/m^2 of average power emitted by the surface, for a total contribution of 1.6 W/m^2 of surface emissions per W/m^2 of input forcing.
If we add 1.6 W/m^2 to the 384.7 W/m^2 emissions at 287K and convert back to a temperature, the new temperature is about 287.3K (0.3K per W/m^2), which also sets an upper limit, since owing to the T^4 relationship the incremental sensitivity must be less than the average, and the average is 1.6 W/m^2 of incremental surface emissions per W/m^2 of incremental post-albedo solar input (forcing), corresponding to a sensitivity of only 0.3C per W/m^2. So to be sure, the sensitivity is not completely settled, but it is definitely between 0.2 and 0.3 C per W/m^2 and not between 0.4 and 1.2 C per W/m^2, and this much is unambiguously settled. • catweazle666 says: "Right, so the science is settled?" The "normal" science? Indeed it is, pretty much. As to the "Post-Normal" science, that's a different matter altogether. • tony mcleod says: I appreciate your thoughtful reply, co2isnotevil. It's been a long time since I immersed myself that deeply in physics, so I need a bit of time to digest it. At a cursory level, am I mistaken in thinking that the Stefan-Boltzmann Law is more properly associated with ideal black bodies? If that is the case – that the Earth is not such a body – what considerations need to be made? Is there a possibility that CO2 forcing, while insufficient by itself, may be enough to precipitate a rapid methane release, which then could be the nudge the system needs to break out of its 'stable' state and abruptly shift to a new equilibrium? • Tony, "the Stefan-Boltzmann Law is more properly associated with ideal black bodies" No. The Stefan-Boltzmann law relates emissions and temperature. A black body is just the degenerate ideal case of unit emissivity. The more general form is a gray body, which characterizes a non-ideal black body and which covers all possible emissions from a body or surface consequential to its temperature. From space, the Earth looks a lot like a gray body whose emissivity is about 0.61 when you consider its temperature to be the temperature of the surface.
BTW, the surface itself (i.e. without the effects of the atmosphere) is a nearly ideal black body (after accounting for reflection), and even Trenberth and most other warmist scientists concur. Adding an atmosphere makes this ideal BB surface look like a gray body from space, and this is what consensus climate science incorrectly denies. The SB Law and COE were settled science long before the IPCC started to distort climate science, where they have precipitated massive lies based on disinformation arising from the arrogant assertion that these basic physical laws are somehow not settled science. "…precipitate a rapid methane release" No. This is one of those BS hypotheticals they put out there to scare people. In fact, methane only acts in a very narrow part of the emissions spectrum and contributes only a tiny amount to the total GHG effect, most of which comes from water vapor and CO2. Even ozone is a bigger contributor than CH4. The CH4 concentration is largely irrelevant, as it affects so little of the emission spectrum. • MarkW says: tony, from whence comes your delusion that adding CO2 to the atmosphere is a forcing, much less a major one that threatens to destabilize climate? • MarkW says: "Right, so the science is settled?" When it comes to the fact that CO2 is a minor player in climate changes, yes it is. • tony mcleod says: Again, thank you for your concise reply, co2isnotevil. I guess I remain open-minded about all this. I see some of the reported effects, like shrinking glaciers and diminishing sea ice, and I have to wonder at their cause. I understand that many here will say they are just natural cycles, and that may be the case, but the rate of these changes leads me to think there may be some anthropogenic factors. • Tony, "… but the rate of these changes leads me to think there may be some anthropogenic factors." What makes you think that there's anything unusual about the current rate of change in ice?
Surely this rate of change is far, far greater when entering or leaving ice ages. As the LIA was ending, the rate of ice advance was so fast in the Alps that monks were dispatched to slow it down. Those at the IPCC will have just as much luck trying to slow down the retreat, especially since they're relying on virtually the same methods. If you are worried about human intervention, you need to refocus your angst on people like Holdren and MacCracken, who have delusions of climate control by geoengineering. 22. Bob Weber says: "The IPCC correctly states that '…the long-term prediction of future climate states is not possible.'" This is only true if we don't know what the future states of solar activity will be into the indefinite future. Now, we might be on the threshold of knowing what solar activity will be a few cycles ahead, but that itself still remains to be seen. Regardless of whether we can confidently know the magnitude and duration of future solar cycles, with my solar model we can model what the climate response would be due to whatever future solar activity cycle scenarios we can dream up. 2016 is a year that came in hot from previously high TSI during the SC24 max, and is going out cold from the fairly rapid drop-off in TSI this year. Is that concept in anyone else's model? Doubtful. Rephrasing my earlier comment, there is no chaos to the sun's weather and climate effect, only in most people's understanding of it. That was not a dig at Kip et al. Thank you for writing an interesting article. • Kip Hansen says: Bob Weber ==> It is certainly true that the Sun is the energy source for the climate system, with some energy still being added from the Earth's core (planetary origins). It is true that the Sun is "fairly" constant, but not absolutely constant, nor is the Earth's orbit (physical relationship to the Sun) constant.
Similarly, the Sun's output is not only not absolutely constant, but the makeup of its radiation is not constant, with solar flares and solar storms throwing out scads of differing radiation and particles. There are many working on the solar/climate problem, seeking to understand which solar changes cause or contribute to which climate changes. It is an interesting field. But solar won't answer what many want to know: "Will we be able to feed nine billion humans in the climate 50 years from now?" "Will changing climate cause a sudden jump in sea levels, flooding half of Bangladesh?" • "Will we be able to feed nine billion humans in the climate 50 years from now?" Probably not, and this is independent of what the climate does, although there's a better chance to feed a larger population if it gets warmer. Mankind will survive regardless of what the climate does, and isn't that all that really matters? • Kip Hansen says: co2isnotevil ==> If it were merely a matter of humanity's survival, then we could quit fussing … but some people would like to see a good outcome for all people … would like to see an end to famines and suffering in Africa … One of the worst misconceptions about climate science is that it is all about warming. • "One of the worst misconceptions about climate science is that it is all about warming." Climate science should be concerned about warming and cooling equally, but since the politics is only concerned about redistributive economics under the guise of climate reparations, and only warming by industrialization can justify these ends, the means is to be concerned only about warming. • richard verney says: The answer to your two questions would appear obvious. First: Yes (whether it warms or cools). Second: No. 50 years is far too short a period for the oceans to significantly expand through warming, or rise through land-based ice melt.
The vast volume and heat capacity of the oceans is a great dampener to the system, such that any significant change will be a slow process. • richard verney says: Perhaps I should expand a little on my reasoning, in view of the comment by co2isnotevil that crossed with mine. There is no such thing as global warming. Warming is regional: some parts of the globe are warming faster than others, and some parts are either not warming or perhaps cooling (data suggests the USA and Greenland have both cooled since the 1940s). What does it matter if high northern latitudes, particularly the Arctic, warm somewhat? Further, it appears that the warming consists mainly of higher nighttime lows (i.e., it is less cold at night), and of Autumn starting a few days later and Spring coming a few days earlier. What's not to like about that? With cheap energy we can do anything. There appears to be plenty of shale and coal, so we have cheap energy for hundreds of years, if only governments would let us use it. There are vast tracts of arable land that are simply not being used. Heck, we have so much foodstuff that we can afford to burn it (biofuels) rather than eat it. Tons of food is thrown away or stockpiled to keep prices up. It is no big deal globally if the grain/wheat belt migrates slightly. With genetic engineering we can modify crops whether it warms or cools (although I accept the point that co2isnotevil makes, that cooling is more problematic). One thing that history demonstrates is that man is extremely adaptable. If that were not the case, our ancestors could not have left Africa about 70,000 years ago and colonized the globe, going right to the Arctic circle. We will meet the challenges of any change. • tony mcleod says: Ironically Richard, 70,000 years ago humans almost went extinct. Humans are part of the system, not separate from it as many would have you believe. • catweazle666 says: "Ironically Richard, 70,000 years ago humans almost went extinct." Is that so?
Driving too many SUVs, were they? • tony mcleod says: "Driving too many SUVs, were they?" That probably wouldn't have helped. 23. n.n says: Outside of the scientific domain, there is no chaos, only perfect characterization and modeling. Let's hope the scientific domain remains so perfectly known and predictable, despite observable and reproducible evidence to the contrary. 24. Bob, they do not want to accept the fact that it is the sun that governs and determines the climate of the earth. • Bob Weber says: Salvatore, this will change. C/AGW is already terminated – it's just that very few know it yet. If I believe anything, it is that the extended scientific community is vastly under-informed on this subject, and that this audience will respond to proper persuasion that includes solid theory and evidence. The earth is so super-sensitive to TSI that its short-term to long-term influence can be readily seen with the right information. Even if I didn't say another word, I am confident that, due to the rapid solar cooling we are undergoing now, one by one the doubters will come around, and no later than one year from now most of the skeptics will be on board, with 95% of the stragglers coming in after they see first-hand the effect of the upcoming cycle minimum. The very last few will hold out until they see the TSI-driven ENSOs that will occur at the onset of SC25 and after the peak of SC25, confirming the timing and pattern of previous solar-cycle-driven ENSOs. By the end of the next solar cycle maximum it will be understood worldwide. The warmists will have nowhere to run, nowhere to hide. Technically it is already over for them, whether they know it or not, or whether they'll ever believe it. Weather and climate operate on extremely simple rules based on solar activity and insolation, not CO2, and it is most definitely not chaotic on a gross level. I look forward to the day when we will be discussing the entire topic in much greater detail.
• "By the end of the next solar cycle maximum it will be understood worldwide" That is not a given, considering that the next cycle will be no weaker [and probably a bit stronger] than the current cycle. • Bob Weber says: LS, I appreciate your insight on this subject. However, you are not fully up to speed on what I'm saying, through no fault of yours. The climate response to whatever the sun throws at us in the next cycle can be understood in the context of the response to previous cycles, and if SC25 is stronger, the solar climate signal will be that much clearer. I am confident that even you will be persuaded by my research. As you know, there is nothing new under the sun…;) • tony mcleod says: Mmm, no, of course they don't, Salvatore. Do you accept the fact that the atmosphere has any effect? 25. jmorpuss says: Science itself has had a pretty chaotic past https://en.wikipedia.org/wiki/Science and the biggest attractor in modern times is the almighty dollar. Dollars give you power = might is right: "those who are powerful can do what they wish unchallenged, even if their action is in fact unjustified." • Johann Wundersamer says: Kip Hansen, thanks for 4 parts of 'The climate system is a coupled non-linear chaotic system, and' – only missed 'and that coupled non-linear chaotic system is self-regulating.' Best regards – Hans 26. Johann Wundersamer says: Kip Hansen, let's make it short: it's not just the problem with computer models; it's with the conditions of the real world. Every new test run with the real existing world, starting March 17, 2016 at 10:08 am, produces a completely different November 25, 2016 at 04:32 pm. Cause that's how the real existing world runs. • Kip Hansen says: Johann Wundersamer ==> Yes, the real-world weather/climate system is chaotic [as in Chaos Theory] … that's a very important point, and I hope that readers who have slogged through all four parts of this series have come to realize that. 27.
Johann Wundersamer says: In the terms of this blog: The null hypothesis of Laplace's Demon is – regardless of the conditions on March 08, 2016 – there's ALWAYS a completely different April 12, 2021. Cheers 28. THE CLIMATE MODELS ARE USELESS – they are useless because they do not factor in the initial state of the climate correctly; they ignore the strength of earth's magnetic field, which moderates solar activity, which the models have no clue how to account for, especially the secondary factors that affect the climate due to solar variations, much less the solar variations themselves. They are useless, and I have more confidence in my climate outlook than in anything a worthless climate model may predict. 29. I am sure everyone agrees that if solar changes are extreme enough, there would be a point where a solar/climate relationship would be obvious. The question is: what does the solar change have to be in order to be extreme enough to show an obvious solar/climate relationship? Again, I have listed the solar parameters which I think satisfy this issue. I have put forth those solar parameters/durations of time which I feel are needed to impact the climate, and I think going forward the solar parameters I have put forth will come to be, which will then manifest itself in the climate system by causing it to cool. I dare say I think it has started already. How much cooling is hard to say, because there are climatic thresholds out there which, if the terrestrial items driven by solar changes should reach them, could cause a much more dramatic climatic impact.

Terrestrial Items:
atmospheric circulation patterns
volcanic activity
global cloud coverage
global snow coverage
global sea surface temperatures
global sea ice coverage
ENSO, a factor within the overall global sea surface temperature changes

Solar Parameters Needed and Sustained:
cosmic ray count 6500 or greater
solar wind speed 350 km/sec or less
EUV light 100 units or less
solar irradiance off by .15% or more
AP index 5 or lower
Interplanetary Magnetic Field 4.5 nT or lower
Solar Flux 90 or lower

Duration of time: over 1 year, following at least 10 years of sub-par solar activity in general, which we have had going back to year 2005. We should know within a year, as prolonged minimum solar conditions become entrenched. • tony mcleod says: "manifest itself in the climate system by causing it to cool. I dare say I think it has started already." Actually, no Salvatore. Are there any graphs here https://wattsupwiththat.com/global-temperature/ that would support that position? • For what it is worth, I already use solar parameters in making multi-year climate forecasts over my region in the Upper Rio Grande. The exercises demonstrate high accuracy. I have also engaged in additional work correlating and employing spectral signatures (time series spectra) relating to the Sun and my subject streams and rivers. You don't have to wait a year to see this. My site contains numerous examples, and of course I'm working towards publications, with a precursor that touches on this peripherally at: 30. And then there's biology says: One minor quibble. The increase in the amount of CO2 in the atmosphere has made the planet greener due to increased efficiency of photosynthesis… biology. Increased plant life has an impact on climate. So biology has an impact on climate… just like physics does. • Kip Hansen says: And then there's biology ==> Yes, true, and that photosynthesis captures and stores some of the energy retained by GHGs. 31. >"The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible." Let's complete the thought (https://www.ipcc.ch/ipccreports/tar/wg1/501.htm), shall we, Kip? The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.
Rather the focus must be upon the prediction of the probability distribution of the system's future possible states by the generation of ensembles of model solutions. Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential. I get it that you don't understand that we only get one realization of actual events to work from (https://judithcurry.com/2016/10/05/lorenz-validated/#comment-815456), but your inability to "get it" is a poor excuse to continue misrepresenting what the IPCC has to say about the very real challenges involved in figuring out what a complex non-linear chaotic system *might* be expected to do in response to relatively abrupt changes in external forcing. • Alan Robertson says: Typical. • catweazle666 says: Hehehe! Still sleeping on that rubber sheet and telling stories to scare the children, Brandon? • Still beating your wife with non sequiturs, catweazle666? 32. Kip, I'll repeat my complaint above that in this series, while you have talked a lot about chaos and attractors and drawn trajectory plots, I don't believe you have ever plotted an attractor, or said anything quantitative about them. They are important, since they are what takes the randomness out of chaos. And they are the analogue of climate in chaotic weather. Here is a plot from Wiki: https://en.wikipedia.org/wiki/Lorenz_system#/media/File:Intermittent_Lorenz_Attractor_-_Chaoscope.jpg It is the "set of numerical values toward which a system tends to evolve" (your initial quote). And it has shape and topology. You have to determine it from trajectories, and this is a process analogous to averaging an ensemble. The equations for that system are dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz, with ρ=28, σ=10, β=8/3. Note again that the system is autonomous – time does not appear on the right-hand side.
That means that you can generate trajectories forever for the same attractors. For GCMs that isn't true. But what is important is that while the trajectories change radically for small changes in initial conditions, the shape of the attractor changes continuously with the parameters. This is analogous to the dependence of climate on forcing. Incidentally, proving that was #14 of Steve Smale's problems, solved in 2002 (but numerically generally found to be true earlier). It is tracking the slow variation of that attractor (climate) with forcing that is the essential GCM climate problem. It has nothing to do with the initial value issues people are hung up on here. I will write up something on this on my blog. • Kip Hansen says: Nick ==> The best evidence against your point is given by Milanovic at Judy's recently. There is no doubt, however, that the climate operates inside of some sort of space that has the appearance of an attractor (the historical climate record), and that inside that space we find all the attributes of chaotic systems in general, both in physical space and time. It is possible, because of this, that in time we might be able to recognize repeating patterns of "fractal-like" behavior inside that space, and work up some sort of probabilities in a very general way. That is light-years away from useful long-term climate prediction or projection by numeric climate models. • Kip Hansen says: Nick ==> we might actually be able to agree on the last two paragraphs above…. yes? • Here's a plot of the surface gain on the Y axis (surface emissions / total solar forcing) vs. surface emissions along the bottom. Sure looks a lot like the behavior of an attractor, albeit a trivial one with only one destination, which is a ratio of about 1.6 for the surface gain, or 1.6 W/m^2 of surface emissions per W/m^2 of forcing.
Each dot is 1 month of data for a 2.5 degree slice of latitude of the planet extracted from the ISCCP cloud data set covering about 3 decades of weather satellite data. The larger dots represent the average over the entire sample period for each slice. • To what can the two lobes be analogous? Ice ages? ENSO? Droughts? All? How long does the climate take to span its phase space? Certainly a ‘stable’ few thousand years could be oscillations about islands of stability. Why are models initialized to closely match current conditions? Won’t the attractor reveal itself regardless of initial conditions? Do modellers expect to determine the attractor and tease out sensitivity in only 100 years of t? • “To what can the two lobes be analogous?” The two ears case is just for one set of parameter values, chosen presumably for appearance. I don’t think there is a climate analogy for this shape. “How long does the climate take to span its phase space?” As I said, climate isn’t autonomous, so this isn’t very meaningful. By the time of “spanning”, conditions have changed. “Why are models initialized to closely match current conditions? Won’t the attractor reveal itself regardless of initial conditions?” They aren’t, and yes. Models are usually started “wound back” – to maybe a century or more ago. The idea is that it is better to let the less well known early initial state settle down than to use recent data which may, through lack of resolution or inaccuracy, be far from the attractor. “Do modellers expect to determine the attractor and tease out sensitivity in only 100 years of t?” Good question. Again it comes back to non-autonomous relations. They are trying to observe a moving attractor. One compromise is to look for TCR (transient) measured over 70 years. But that may vary with time.
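The Lorenz system quoted above (ρ=28, σ=10, β=8/3) is simple enough to illustrate the point being argued numerically. The following sketch (forward Euler with an arbitrarily chosen step of 0.001; my own illustration, not code from any of the commenters or from a GCM) integrates two trajectories whose starting points differ by one part in 10^8:

```python
import math

# Lorenz system as quoted in the comment above: sigma=10, rho=28, beta=8/3.
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance one forward-Euler step (a deliberately crude integrator)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, n_steps, dt=0.001):
    points = [state]
    for _ in range(n_steps):
        state = lorenz_step(state, dt)
        points.append(state)
    return points

# Two runs whose initial conditions differ by one part in 10^8.
a = trajectory((1.0, 1.0, 1.0), 25000)
b = trajectory((1.0 + 1e-8, 1.0, 1.0), 25000)

separation = math.dist(a[-1], b[-1])   # grows to a macroscopic distance
bounded = all(abs(c) < 200.0 for p in a + b for c in p)
print(separation, bounded)
```

The two runs end up in visibly different states, yet neither ever leaves the bounded region of the attractor, which is exactly the distinction between the trajectory (weather) and the attractor (climate) being debated in this thread.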
• “I don’t think there is a climate analogy for this shape.” This is the general shape for a solution space with a pair of quasi-stable states, where the system is stable in either given the same stimulus, but can be easily pushed one way or another by orthogonal factors. El Nino/La Nina is an example and there are many others. The composite shape corresponding to the actual Earth climate system response is the sum of a lot of smaller shapes with 2 or more lobes which, when combined, provide a solution space for the background ‘noise’ centered around a steady state dictated by COE requirements. The takeaway should be that all this chaos is nothing but weather and that weather is not the climate. Ice ages and interglacials are not an example of quasi-stable states with the same stimulus, as the stimulus is a function of orbital characteristics with asymmetry between hemispheres and given the characteristics as compared to other similar times, global scale glaciation is not sustainable and we should either be in an interglacial period or transitioning into one. The chaos is around state pairs that are much closer together. On the ice age side of the climate, there is an extenuating circumstance that makes ice ages deeper, which is increased reflection from increased surface ice and snow; however, we are relatively close to minimum possible average ice already and this albedo effect can only enhance future cooling but lacks the dynamic range to have much effect on future warming. • catweazle666 says: “To what can the two lobes be analogous?” Ice ages. http://www.biocab.org/Geological_Timescale.jpg Looks like two big lobes and a fair bit of noise to me. And not a sniff of a relationship between temperature and CO2 in sight. Oh, and it seems inevitable it is going to warm up quite a bit sooner or later, whether we want it to or not.
• “many lobes over many time scales.” Ice ages and interglacials are not 2 stable solutions given the same stimulus and do not fit this pattern. Ice ages and interglacials are unambiguously related to changes in the Earth’s orbit and axis. This is not chaotic noise, but a causal response to a quantifiable change. The solution space certainly has many lobes over many time scales, but the lobes are close together (i.e. just on either side of balance) and the time scales are short since the climate system’s time constant is only on the order of a year. If it was the decades to centuries claimed by the IPCC, we would not even notice seasonal change since the response would be too slow. • tony mcleod says: So, let’s dump 30Gt into the air and see what happens. Where’s my popcorn. • “Can’t a non-autonomous system be converted to an autonomous one?” Not usually. Non-autonomous means the equations (coefficients) change with time. To find an autonomous set which admitted the same solutions would be extreme good fortune. • Kip Hansen says: Nick ==> While I disagree with your point, I look forward to seeing your post on it. 33. jmorpuss says: “Coulomb’s law, or Coulomb’s inverse-square law, is a law of physics that describes force interacting between static electrically charged particles. In its scalar form the law is: F = ke q1 q2 / r^2, where ke is Coulomb’s constant (ke = 8.99×10^9 N m^2 C^−2), q1 and q2 are the signed magnitudes of the charges, and the scalar r is the distance between the charges. The force of interaction between the charges is attractive if the charges have opposite signs (i.e. F is negative) and repulsive if like-signed (i.e. F is positive). The law was first published in 1784 by French physicist Charles Augustin de Coulomb and was essential to the development of the theory of electromagnetism.
It is analogous to Isaac Newton’s inverse-square law of universal gravitation. Coulomb’s law can be used to derive Gauss’s law, and vice versa. The law has been tested extensively, and all observations have upheld the law’s principle.” https://en.wikipedia.org/wiki/Coulomb%27s_law Forcing something will produce a resistance and the result = heating. The harder the electron has to work to hold the molecule together, the hotter it gets. 34. Kip, this is a great article on an important subject, but careful thought is needed as to where chaos-nonlinearity actually moves the climate debate. To say “climate is chaotic so can’t be predicted” is an exaggeration and provides alarmists with a straw man to burn and a pretext to crow to one another that they have seen off again the threat of chaos to their orderly and simplistic doom architecture. Among all the details of chaos theory, the most important message of chaos should not be lost. This is that chaotic-nonlinear dynamics, together with the vast ocean heat content and its sharp temperature gradients – especially vertical – mean that climate changes itself by internal chaotic dynamics. Talk of climate change as always requiring external forcing exposes profound ignorance of chaotic dynamics, or alternatively, denial of chaos. It is not correct that extreme sensitivity to initial conditions is a sufficient condition for chaos. Non chaotic systems also display such extreme sensitivity. There are further conditions that are needed for chaotic-nonlinear dynamics to emerge.
Some of these are listed below although I can’t say which of these are either necessary or sufficient:
– A dissipative system with open flow-through of energy
– Negative feedback, also called friction or damping
– Positive feedback, also referred to as excitability or reactivity; interaction between positive and negative feedbacks can drive chaotic dynamics
– A negative Lyapunov exponent, often associated with dissipative damped systems, makes outcomes converge to an attractor.
– Degrees of freedom in a number of parameters that provide the dimensions of a phase space within which a negative Lyapunov exponent and chaotic attractors can emerge.
• Kip Hansen says: ptolemy2 ==> The quote “To say ‘climate is chaotic so can’t be predicted’” is a doppelganger of the real quote, which is: “… therefore the long-term prediction of future climate states is not possible.” And this conclusion was not drawn from “sensitivity to initial conditions” alone but from the expanded study of dynamical systems in general. Lorenz’s computer bug turned out to be “initial conditions” and passing his paper around got a lot of people excited and looking into different aspects of what later was named Chaos Theory. All your points are valid of course, they are part of the field. Thank you for bringing them up. 35. Many thanks for these posts on Chaos. Chaos as used in this post appears to adopt a standard but typically unspoken assumption that one knows all of the variables at play very well, even as strange attractors emerge from the repeated numerical experiments. That’s easy to see for any who work with nonlinear dynamics and the analytical and numerical implementations thereof. But what if all of the variables and/or mechanisms are not truly known? Then the concern is not really about chaos but rather about epistemic uncertainty. In other words, how do we really know what we don’t know? One way to advance is to consider alternative conceptual models, and run exercises for those.
Then it would be prudent to compare the forecasts to the data, and also compare that validation exercise to the prior conceptual models and their predictive offspring, and see which model does a better job. It doesn’t necessarily solve everything, but if a better model is found, perhaps the chaos argument becomes somewhat more moot. This is the basis for my own successful exercises which I believe demonstrate an ability to forecast drought and pluvials in some regions, many years in advance. This is done without reliance upon numerical deterministic models such as the serially-reinitialized GCMs. The proof is here: http://www.abeqas.com/mwa-demonstrates-proven-drought-forecasting-a-possible-first-in-the-climate-industry/ I love chaotic topics and am sure they will never go away in key aspects of climate science. But in this case, given my reproducible experiences, they may not be the true obstacle. • Kip Hansen says: Mike ==> A lot of attention is being turned to the “uncertainty” issue in Climate Science — Judith Curry has been discussing it for some time. And yes, uncertainty is what we know, what we don’t know, what we don’t know we don’t know, …. In this essay today I point out that one of the things we don’t know is whether the climate system has an attractor and, if so, what does it look like? I do offer my best guess that the historic climate record reveals at least the limits of the attractor-like behavior of the climate system. • Perhaps it is much simpler than that. Climate scientists and many others declare that since the TSI changes by only a fraction of a percentage point, the sun cannot be a principal driver of climate change. Climate change as seen through the global temperatures periscope is the result of a finely balanced system, which can be disproportionately thrown off from its natural tendency towards equilibrium even by the smallest of changes.
http://www.vukcevic.talktalk.net/CSb.gif • jmorpuss says: The biggest attractor re: climate change is Earth (Ground). Half of atmospheric heat is a process of the resistance build-up between Earth and Sun: http://physicsworld.com/cws/article/news/2011/jul/19/radioactive-decay-accounts-for-half-of-earths-heat If you stand back and look at Earth as a complex molecule, then like all molecules, it’s surrounded by a cloud of electrons. Earth’s surface is like a giant Van de Graaff generator. The spark that is produced between ground and atmosphere we call lightning; even if you don’t see a spark, the exchange of energy still takes place. A low pressure system works in the up direction and can produce foul weather, and a high works in the down direction, producing good weather. https://en.wikipedia.org/wiki/Van_de_Graaff_generator • StephanF says: There are knowns, then there are known unknowns and then are unknown unknowns… • StephanF says: … and then there are unknown unknowns… 36. George Steiner says: In the real world the climate change caravan moves on while the skeptic dogs bark. In the town where I live the mayor forbade the use of plastic shopping bags. Protesters stop pipeline construction. The federal government will introduce a carbon tax. I hope you all are having a good time. The left of course is driving the camels. 37. Barbara Hamrick says: I was intrigued by the NCAR/UCAR images, but it seemed to me that comparing the ensemble mean (EM) to the observations cannot produce a valid measure of natural vs. man-made contributions. I assume what they’re saying (although I’m grossly over-simplifying) is basically, if you look at the observations, and “subtract” off the EM, you get humanity’s contribution (or, they somehow “parse out” the contribution). But, the reality is the climate isn’t “an average” of all possible climates, so if the natural state were actually image 24, then we contributed much less (if any at all) to the warmth. Am I missing something here? P.S.
I have long understood that the climate is a “coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible,” but am astounded that the IPCC seems to have forgotten that. • Kip Hansen says: Barbara Hamrick ==> You have it exactly right — the ensemble mean of 30 chaotic outputs is only the mean of 30 chaotic outputs — nothing more — nothing less. I wrote an entire post on this at Climate Etc titled Lorenz Validated. You’ve pretty much nailed the central point. Thanks for reading and commenting here. • > Am I missing something here? That we only have one realization of the actual system from which to work might be a good candidate, Barbara. > P.S. I have long understood that the climate is a “coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible,” but am astounded that the IPCC seems to have forgotten that. They haven’t. Kip likes to omit the rest of the paragraph in that particular quotemine: Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential. He then likes to “pretend” that he doesn’t understand how Stoopid Modulz Ensemblez can be useful to obtain that stated statistical goal, even as he gives the concept some lip-service. Here’s Kip’s end-game: That is light-years away from useful long-term climate prediction or projection by numeric climate models. Quite convenient that searching an effectively infinite state space is required before being able to make useful policy decisions, innit. 39. Paul Blase says: Here’s an interesting article on Chaotic Circuits Can Mimic Brain Function, Aid Computing. The authors show …one can realize a ring network, wherein each of the 30 nodes is a single-transistor chaotic oscillator comprising only 5 discrete components, and is resistively coupled to its neighbours (Fig. 1, Fig. 2). The circuits can be tuned to oscillate chaotically, in other words, to retain deterministic dynamics but operate in such a manner that small fluctuations are rapidly amplified in time. What is particularly interesting here is how, in a ring of oscillators running in a chaotic fashion, if the oscillators are coupled with intermediate strength, they spontaneously form communities of units that preferentially synchronize with one another. In other words, very small signals through the intermediate oscillators synchronize much larger signals in separated units. This could explain, for instance, the apparent observed synchronization between planetary alignment and solar activity. A “driving” action is not necessary, simply the kind of chaotic synchronization mentioned in the paper. 40. Kip Hansen says: Epilogue: My thanks to all of you who have read this and the three earlier parts of this series — and to those who have joined into the conversation here in the Comments Section. A lot of good insights and interesting questions. The four parts here and the one post at Climate Etc. complete this series. I am hoping that some of the commenters here who have expressed alternate understanding will work up substantial essays outlining their views, either here or on some other blog — I look forward to reading them.
(Authors can email me at my first name at the domain i4 decimal net to make sure I see their efforts.) Those with pressing questions left unanswered can email as above. 41. Kip Hansen: “We have absolutely (literally absolutely) no idea what the precise, or even an, attractor for the weather or climate system might look like, separate from the long-term historic climate record.” Here: https://chaosaccounting.files.wordpress.com/2015/01/greenland-basin-of-attraction.png I’ve tried to explain a physical basin of attraction. There’s a lot of water. Some of it is strongly attracted to Greenland but most of it on any given day is not. Sea ice could also be a basin of attraction as would be the sea water near it. Humidity level changes can be thought of as basins. The fact that water is so important to climate coincides with its ability to change form as with a bifurcation diagram as well as its information carrying ability and memory. Another example is a lake in Minnesota. In fall the evaporation rate is high until it comes to about a full stop when it ices over. The Winter basin of attraction is “don’t evaporate”; the Summer one is “evaporate”.
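The basin-of-attraction idea in that last comment can be made concrete with a toy example (my own, not the commenter’s): the one-dimensional system dx/dt = x − x³ has attractors at x = +1 and x = −1, and the sign of the starting point alone determines which basin a trajectory belongs to:

```python
# Toy bistable system dx/dt = x - x**3: attractors at x = +1 and x = -1,
# with basins of attraction (0, inf) and (-inf, 0) respectively.
def settle(x0, dt=0.01, n_steps=5000):
    """Integrate with forward Euler and return where the trajectory settles."""
    x = x0
    for _ in range(n_steps):
        x += dt * (x - x ** 3)
    return x

print(settle(0.5))    # settles near +1
print(settle(-0.3))   # settles near -1
```

Everything to the right of x = 0 is the basin of +1, everything to the left is the basin of −1; the freeze/evaporate states in the lake analogy play the same role.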
How fast could the person have to travel? 1. Apr 18, 2007 jenita a person whose mass is 48 kg wishes to gain 12 kg relativistically with respect to another reference frame. How fast could the person have to travel????? the formula is as follows.... i Think M is 48 kg mo is 12 kg v=?????? can anyone help me to do this problem Attached Files: • untitled.JPG File size: 1.7 KB Views: 94 2. Apr 18, 2007 hage567 Mo will be 48 kg. M will not be 12 kg, since the question states that the person wishes to gain 12 kg (so the mass will increase by this much). I can't see the picture yet. Does it show where you are getting stuck? 3. Apr 18, 2007 jenita sorry i dont no why the pic is not showing up...its the formula in the pic.... i hope u no the formula.....if mo is 48 then what is 12kg 4. Apr 18, 2007 jenita is 12 kg M or do we have to find M first 5. Apr 18, 2007 jenita oh ya by the way c is 3.8 * 10^8 6. Apr 18, 2007 jenita 7. Apr 18, 2007 hage567 Is this the equation you have? $$M = \frac{M_o}{\sqrt{1 - v^2/c^2}}$$ You want to solve for v. Don't worry about the actual value of c (you've got it wrong by the way, it's 3x10^8 m/s), just leave it as "c" in your work. At the end, you can worry about converting it. You are given Mo. You can figure out M because you know the gain in mass is 12 kg. So, what is M, then? Last edited: Apr 18, 2007 8. Apr 18, 2007 jenita but to find M dont u need V 9. Apr 18, 2007 hage567 10. Apr 18, 2007 jenita how can i solve it if i dont have M, v, and c...dont i need to have 1 11. Apr 18, 2007 hage567 You are trying to solve for v. You must rearrange that equation to get v in terms of the masses. It IS the equation you are using, right? You have the information to get both masses without solving anything! 12. Apr 18, 2007 jenita 75000000...i got this by inserting 12 for M, 48 for mo and 3.08*10^8 for c.... i divided 12 from 48 which i got.25 and the squared it and got.0625.... i squared 1 and c and then multiplied by c and 1 to the .0625...... 
i was left with 5.625e15=v^2 and i squared it and got the answer..... sorry it looks confusing 13. Apr 18, 2007 jenita and yes that is the equation we are using 14. Apr 18, 2007 hage567 Like I've already said, M is not 12 kg. 12 kg is the change in the mass. So if the mass started at 48 kg and 12 kg is added relativistically, what is the new mass (M)? 15. Apr 18, 2007 jenita oh lol....its 60 kg 16. Apr 18, 2007 jenita so u insert 60 for m 17. Apr 18, 2007 jenita so for M u insert 60 and then solve 18. Apr 18, 2007 hage567 There you go! Let's see what answer you get now. 19. Apr 18, 2007 jenita i got 375000000...i did the same way like i did to get 75000000 20. Apr 18, 2007 hage567 Well that can't be right since that is faster than the speed of light. Are you using 3 x 10^8 m/s for c?
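For reference, the thread’s equation can be rearranged for v: from M = M0/√(1 − v²/c²) it follows that v = c·√(1 − (M0/M)²). A quick check (my sketch, not something posted in the original thread) with M0 = 48 kg and M = 60 kg:

```python
import math

def speed_for_mass(rest_mass, relativistic_mass, c=3.0e8):
    """Invert M = M0 / sqrt(1 - v**2 / c**2) for the speed v."""
    return c * math.sqrt(1.0 - (rest_mass / relativistic_mass) ** 2)

# A 48 kg person gaining 12 kg relativistically means M = 48 + 12 = 60 kg.
v = speed_for_mass(48.0, 60.0)
print(v)  # about 1.8e8 m/s, i.e. 0.6c
```

With M = 60 kg rather than 12 kg, the answer comes out at 0.6c, comfortably below the speed of light, which resolves the impossible 3.75×10^8 m/s result at the end of the thread.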
• 1. Abbasi Hoseini, A. KTH, School of Engineering Sciences (SCI), Mechanics, Fluid Physics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Chemical Science and Engineering (CHE), Centres, Wallenberg Wood Science Center. Finite-length effects on dynamical behavior of rod-like particles in wall-bounded turbulent flow. 2015. In: International Journal of Multiphase Flow, ISSN 0301-9322, E-ISSN 1879-3533, Vol. 76, p. 13-21. Article in journal (Refereed). Combined Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV) measurements have been performed in dilute suspensions of rod-like particles in wall turbulence. PIV results for the turbulence field in the water table flow apparatus compared favorably with data from Direct Numerical Simulations (DNS) of channel flow turbulence and the universality of near-wall turbulence justified comparisons with DNS of fiber-laden channel flow.
In order to examine any shape effects on the dynamical behavior of elongated particles in wall-bounded turbulent flow, fibers with three different lengths but the same diameter were used. In the logarithmic part of the wall-layer, the translational fiber velocity was practically unaffected by the fiber length l. In the buffer layer, however, the fiber dynamics turned out to be severely constrained by the distance z to the wall. The short fibers accumulated preferentially in low-speed areas and adhered to the local fluid speed. The longer fibers (l/z > 1) exhibited a bi-modal probability distribution for the fiber velocity, which reflected an almost equal likelihood for a long fiber to reside in an ejection or in a sweep. It was also observed that in the buffer region, high-speed long fibers were almost randomly oriented whereas for all size cases the slowly moving fibers preferentially oriented in the streamwise direction. These phenomena have not been observed in DNS studies of fiber suspension flows and suggested l/z to be an essential parameter in a new generation of wall-collision models to be used in numerical studies. • 2. Abreu, L. I. KTH, School of Engineering Sciences (SCI), Mechanics, Stability, Transition and Control. KTH, School of Engineering Sciences (SCI), Mechanics, Stability, Transition and Control. KTH, School of Engineering Sciences (SCI), Mechanics, Stability, Transition and Control. Wavepackets in turbulent flow over a NACA 4412 airfoil. 2018. In: 31st Congress of the International Council of the Aeronautical Sciences, ICAS 2018, International Council of the Aeronautical Sciences, 2018. Conference paper (Refereed). Turbulent flow over a NACA 4412 airfoil with an angle of attack AoA = 5° was analysed using an incompressible direct numerical simulation (DNS) at chord Reynolds number of Re_c = 4 · 10^5.
Snapshots of the flow field were analysed using the method of Spectral Proper Orthogonal Decomposition (SPOD) in frequency domain, in order to extract the dominant coherent structures of the flow. Focus is given to two-dimensional disturbances, known to be most relevant for aeroacoustics. The leading SPOD modes show coherent structures forming a wavepacket, with significant amplitudes in the trailing-edge boundary layer and in the wake. To model coherent structures in the turbulent boundary layer, the optimal harmonic forcing and the associated linear response of the flow were obtained using the singular value decomposition of the linear resolvent operator. The resolvent analysis shows that the leading SPOD modes can be associated to most amplified, linearised flow responses. Furthermore, coherent structures in the wake are modelled as the Kelvin-Helmholtz mode from linear stability theory (LST). • 3. KTH, School of Engineering Sciences (SCI), Mechanics. Fluid Dynamics of Phonation. 2014. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis. This thesis aims at presenting the studies conducted using computational modeling for understanding physiology of glottis and mechanism of phonation. The process of phonation occurs in the larynx, commonly called the voice box, due to the self-sustained vibrations induced in the vocal folds by the airflow. The physiology of glottis can be understood using fluid dynamics which is a vital process in developing and discovering voice disorder treatments. Simulations have been performed on a simplified two-dimensional version of the glottis to study the behavior of the vocal folds with the help of fluid-structure interaction. Fluid and structure interact in a two-way coupling and the flow is computed by solving 2D compressible Navier-Stokes equations.
This report will present the modeling approach, solver characteristics and outcome of the three studies conducted: glottal gap study, Reynolds number study and elasticity study. • 4. KTH, School of Engineering Sciences (SCI), Mechanics. Investigation of Differences in Ansys Solvers CFX and Fluent. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis. This thesis aims at presenting Computational Fluid Dynamics studies conducted on an axisymmetric model of the Siemens SGT-800 burner using Ansys Fluent, Ansys CFX and Ansys ICEM. The goal is to perform a mesh study and turbulence model study for isothermal flow. The result will show the differences observed while using the two solvers by Ansys, Fluent and CFX. Two different meshes, A, coarse and B, optimal have been used for the mesh study. This will reveal the mesh dependency of the different parameters and if any differences are observed between the solver’s convergence and mesh independency performance. To further validate the mesh independency, a simplified test case is simulated for turbulent flow for 32 different cases testing the numerical algorithms and spatial discretization available in Ansys Fluent and finding the optimal method to achieve convergence and reliable results. Turbulence model study has been performed where k-ε, k-ω and k-ω Shear Stress Transport (SST) models have been simulated and the results between solvers and models are compared to see if the solvers’ way of handling the different models varies. Studies from this thesis suggest that both solvers implement the turbulence models differently. Out of the three models compared, k-ω SST is the model with the least differences between solvers. The solutions look alike and therefore it could be suggested to use this model, whenever possible, for future studies when both solvers are used.
For the models k-ε and k-ω significant differences were found between the two solvers when comparing velocity, pressure and turbulence kinetic energy. Different reasons for its occurrence are discussed in the thesis and also attempts have been made to rule out a few of the reasons to narrow down the possible causes. One of the goals of the thesis was to also discuss the differences in user-interface and solver capabilities which have been presented in the conclusions and discussions section of the report. Questions that still remain unanswered after the thesis are why these differences are present between solvers and which of the solvers’ results are more reliable when these differences have been found. • 5. Agarwal, A. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Mechanics. Transition to Turbulence in Viscoelastic Channel Flow. 2015. In: Procedia IUTAM, Elsevier, 2015, p. 519-526. Conference paper (Refereed). The influence of viscoelasticity on bypass transition to turbulence in channel flow is studied using data from direct numerical simulations by Agarwal et al. (2014). The initial field is a superposition of a laminar base state and a localized disturbance. Relative to the Newtonian conditions, the polymeric FENE-P flow delays the onset of transition and extends its duration. The former effect is due to a weakening of the pre-transitional disturbance field, while the prolonged transition region is due to a slower spreading rate of the turbulent spots. Once turbulence occupies the full channel, a comparison of the turbulence fields shows that energetic flow structures are longer and wider in the polymeric flow. The final turbulent state is compared to elasto-inertial turbulence (EIT), where the polymer conformation field takes the form of elongated sheets with wide spanwise extent. © 2015 The Authors. • 6. Agarwal, Akshat KTH, School of Engineering Sciences (SCI), Mechanics.
KTH, Centres, SeRC - Swedish e-Science Research Centre. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. Linear and nonlinear evolution of a localized disturbance in polymeric channel flow2014In: Journal of Fluid Mechanics, ISSN 0022-1120, E-ISSN 1469-7645, Vol. 760, p. 278-303Article in journal (Refereed) The evolution of an initially localized disturbance in polymeric channel flow is investigated, with the FENE-P model used to characterize the viscoelastic behaviour of the flow. In the linear growth regime, the flow response is stabilized by viscoelasticity, and the maximum attainable disturbance energy amplification is reduced with increasing polymer concentration. The reduction in the energy growth rate is attributed to the polymer work, which plays a dual role. First, a spanwise polymer-work term develops, and is explained by the tilting action of the wall-normal vorticity on the mean streamwise conformation tensor. This resistive term weakens the spanwise velocity perturbation, thus reducing the energy of the localized disturbance. The second action of the polymer is analogous, with a wall-normal polymer work term that weakens the vertical velocity perturbation. Its indirect effect on energy growth is substantial since it reduces the production of Reynolds shear stress and in turn of the streamwise velocity perturbation, or streaks. During the early stages of nonlinear growth, the dominant effect of the polymer is to suppress the large-scale streaky structures which are strongly amplified in Newtonian flows. As a result, the process of transition to turbulence is prolonged and, after transition, a drag-reduced turbulent state is attained. • 7. KTH, School of Engineering Sciences (SCI), Mechanics. 
Implementation of k-Exact Finite Volume Reconstruction in DLR’s Next-Generation CFD Solver: Flucs and its Comparison to Other Methods2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis This thesis extended the order of the reconstruction of state for convective fluxes used by the Finite Volume (FV) algorithm in DLR’s next-generation CFD solver Flucs from constant and linear to quadratic and cubic. Two approaches for calculating derivatives were implemented in Flucs and some test cases were tried. To allow for integration of moments within each cell and a higher-order integration of fluxes, the mesh used by Discontinuous Galerkin (DG) was fed to the FV algorithm. Insufficient geometric treatment of the boundary cells and the dummy cells in FV is believed to be detrimental to the order of error reduction in the NACA0012 case and the smooth bump case. In the smooth bump case, the FV algorithms failed to show higher than second-order error reduction for this reason. The order of the schemes away from the boundaries was verified with the Ehrenfried vortex test case. For at least structured meshes and unstructured meshes with quads, schemes of order k approached order k + 1 accuracy on sufficiently fine meshes. The original goal of this thesis was partly accomplished and some further work on the code is expected. • 8. KTH, School of Engineering Sciences (SCI), Mechanics. An experimental study of fiber suspensions between counter-rotating discs2009Licentiate thesis, monograph (Other academic) The behavior of fibers suspended in a flow between two counter-rotating discs has been studied experimentally. This is inspired by the refining step in the papermaking process, where cellulose fibers are ground between discs in order to change their performance in the papermaking process and/or the qualities of the final paper product. 
To study the fiber behavior in a counter-rotating flow, an experimental set-up with two glass discs was built. A CCD-camera was used to capture images of the fibers in the flow. Image analysis based on the concept of steerable filters extracted the position and orientation of the fibers in the plane of the discs. Experiments were performed for gaps of 0.1-0.9 fiber lengths, and for equal absolute values of the angular velocities for the upper and lower disc. The aspect ratios of the fibers were 7, 14 and 28. Depending on the angular velocity of the discs and the gap between them, the fibers were found to organize themselves in fiber trains. A fiber train is a set of fibers positioned one after another in the tangential direction with a close to constant fiber-to-fiber distance. In the fiber trains, each individual fiber is aligned in the radial direction (i.e. normal to the main direction of the train). The experiments show that the number of fibers in a train increases as the gap between the discs decreases. Also, the distance between the fibers in a train decreases as the length of the train increases, and the results for short trains are in accordance with previous numerical results in two dimensions. Furthermore, the results of different aspect ratios imply that there are three-dimensional fiber end-effects that are important for the forming of fiber trains. • 9. KTH, School of Engineering Sciences (SCI), Mechanics. A study of turbulence and scalar mixing in a wall-jet using direct numerical simulation2006Licentiate thesis, comprehensive summary (Other scientific) Direct numerical simulation is used to study the dynamics and mixing in a turbulent plane wall-jet. The investigation is undertaken in order to extend the knowledge base of the influence of the wall on turbulent dynamics and mixing. The mixing statistics produced can also be used to evaluate and develop models for mixing and combustion. 
In order to perform the simulations, a numerical code was developed. The code employs high-order compact finite difference schemes for spatial integration, and a low-storage Runge-Kutta method for the temporal integration. In the simulations performed, the inlet-based Reynolds and Mach numbers of the wall jet were Re = 2000 and M = 0.5, respectively. Above the jet a constant coflow of 10% of the inlet jet velocity was applied. A passive scalar was added at the inlet of the jet, in a non-premixed manner, enabling an investigation of the wall-jet mixing as well as the dynamics. The mean development and the respective self-similarity of the inner and outer shear layers were studied. Comparisons of properties in the shear layers of different character were performed by applying inner and outer scaling. The characteristics of the wall-jet were compared to what has been observed in other canonical shear flows. In the inner part of the jet, 0 ≤ y+ ≤ 13, the wall-jet was found to closely resemble a zero pressure gradient boundary layer. The outer layer was found to resemble a free plane jet. The downstream growth rate of the scalar was approximately equal to that of the streamwise velocity, in terms of the growth rate of the half-width. The scalar fluxes in the streamwise and wall-normal direction were found to be of comparable magnitude. • 10. KTH, School of Engineering Sciences (SCI), Mechanics. Numerical studies of turbulent wall-jets for mixing and combustion applications2007Doctoral thesis, comprehensive summary (Other scientific) Direct numerical simulation is used to study turbulent plane wall-jets. The investigation is aimed at studying dynamics, mixing and reactions in wall bounded flows. The produced mixing statistics can be used to evaluate and develop models for mixing and combustion. An aim has also been to develop a simulation method that can be extended to simulate realistic combustion including significant heat release. 
The numerical code used in the simulations employs a high-order compact finite difference scheme for spatial integration, and a low-storage Runge-Kutta method for the temporal integration. In the simulations, the inlet-based Reynolds and Mach numbers of the wall-jet are Re = 2000 and M = 0.5, respectively, and above the jet a constant coflow of 10% of the inlet jet velocity is applied. The development of an isothermal wall-jet including passive scalar mixing is studied and the characteristics of the wall-jet are compared to observations of other canonical shear flows. In the near-wall region the jet resembles a zero pressure gradient boundary layer, while in the outer layer it resembles a plane jet. The scalar fluxes in the streamwise and wall-normal direction are of comparable magnitude. In order to study effects of density differences, two non-isothermal wall-jets are simulated and compared to the isothermal jet results. In the non-isothermal cases the jet is either warm and propagating in a cold surrounding or vice versa. The turbulence structures and the range of scales are affected by the density variation. The warm jet contains the largest range of scales and the cold the smallest. The differences can be explained by the varying friction Reynolds number. Conventional wall scaling fails due to the varying density. An improved collapse in the inner layer can be achieved by applying a semi-local scaling. The turbulent Schmidt and Prandtl numbers vary significantly only in the near-wall layer and in a small region below the jet center. A wall-jet including a single reaction between a fuel and an oxidizer is also simulated. The reactants are injected separately at the inlet and the reaction time scale is of the same order as the convection time scale and independent of the temperature. The reaction occurs in thin reaction zones convoluted by high intensity velocity fluctuations. • 11. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. 
KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. A numerical method for simulation of turbulence and mixing in a compressible wall-jet2007Report (Other academic) • 12. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. Direct numerical simulation of a plane turbulent wall-jet including scalar mixing2007In: Physics of fluids, ISSN 1070-6631, E-ISSN 1089-7666, Vol. 19, no 6, p. 065102-Article in journal (Refereed) Direct numerical simulation is used to study a turbulent plane wall-jet including the mixing of a passive scalar. The Reynolds and Mach numbers at the inlet are Re=2000 and M=0.5, respectively, and a constant coflow of 10% of the inlet jet velocity is used. The passive scalar is added at the inlet enabling an investigation of the wall-jet mixing. The self-similarity of the inner and outer shear layers is studied by applying inner and outer scaling. The characteristics of the wall-jet are compared to what is reported for other canonical shear flows. In the inner part, the wall-jet is found to closely resemble a zero pressure gradient boundary layer, and the outer layer is found to resemble a free plane jet. The downstream growth rate of the scalar is approximately equal to that of the streamwise velocity in terms of the growth rate of the half-widths. The scalar fluxes in the streamwise and wall-normal direction are found to be of comparable magnitude. The scalar mixing situation is further studied by evaluating the scalar dissipation rate and the mechanical to mixing time scale ratio. • 13. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. 
Direct numerical simulation of a reacting turbulent wall-jet2007Report (Other academic) • 14. KTH, School of Engineering Sciences (SCI), Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. Direct numerical simulation of non-isothermal turbulent wall-jets2009In: Physics of fluids, ISSN 1070-6631, E-ISSN 1089-7666, Vol. 21, no 3Article in journal (Refereed) Direct numerical simulations of plane turbulent nonisothermal wall jets are performed and compared to the isothermal case. This study concerns a cold jet in a warm coflow with an ambient to jet density ratio of ρa/ρj = 0.4, and a warm jet in a cold coflow with a density ratio of ρa/ρj = 1.7. The coflow and wall temperature are equal and a temperature dependent viscosity according to Sutherland’s law is used. The inlet Reynolds and Mach numbers are equal in all these cases. The influence of the varying temperature on the development and jet growth is studied as well as turbulence and scalar statistics. The varying density affects the turbulence structures of the jets. Smaller turbulence scales are present in the warm jet than in the isothermal and cold jet and consequently the scale separation between the inner and outer shear layer is larger. In addition, a cold jet in a warm coflow at a higher inlet Reynolds number was also simulated. Although the domain length is somewhat limited, the growth rate and the turbulence statistics indicate approximate self-similarity in the fully turbulent region. The use of van Driest scaling leads to a collapse of all mean velocity profiles in the near-wall region. Taking into account the varying density by using semilocal scaling of turbulent stresses and fluctuations does not completely eliminate differences, indicating the influence of mean density variations on normalized turbulence statistics. 
Temperature and passive scalar dissipation rates and time scales have been computed since these are important for combustion models. Except for very near the wall, the dissipation time scales are rather similar in all cases and fairly constant in the outer region. • 15. KTH, School of Engineering Sciences (SCI), Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Turbulence. KTH, School of Engineering Sciences (SCI), Mechanics. Direct numerical simulation of mixing in a plane compressible and turbulent wall jet2005In: 4th International Symposium on Turbulence and Shear Flow Phenomena, 2005, p. 1131-1136Conference paper (Refereed) Direct numerical simulation (DNS) is used to simulate the mixing of a passive scalar in a plane compressible and turbulent wall jet. The Mach number of the jet is M = 0.5 at the inlet. The downstream development of the jet is studied and compared to experimental data. Mixing in the inner and outer shear layers of the wall jet is investigated through scalar fluxes, the probability density function of the scalar concentration and the joint probability density function of the wall normal velocity fluctuation and the scalar concentration. • 16. KTH, School of Engineering Sciences (SCI), Mechanics. Aeroelastic FE modelling of wind turbine dynamicsIn: Computers & structures, ISSN 0045-7949, E-ISSN 1879-2243Article in journal (Refereed) By designing wind turbines with very flexible components it is possible to reduce loads and consequently the associated cost. As a result, the increased flexibility will introduce geometrical nonlinearities. Design tools that can cope with those nonlinearities will therefore be necessary at some stage of the design process. The developed model uses the commercial finite element system MSC.Marc, which is an advanced finite element system focused on nonlinear design and analysis, to predict the structural response. 
The aerodynamic model named AERFORCE, used to transform the wind to loads on the blades, is a Blade-Element/Momentum model, developed by The Swedish Defence Research Agency (FOI, previously named FFA). The paper describes the developed model with focus on component modelling to allow for geometrical nonlinearities. Verification results are presented and discussed for an extensively tested Danwin 180 kW stall-controlled wind turbine. Code predictions of mechanical loads, fatigue and spectral properties, obtained at normal operational conditions, have been compared with measurements. The simulated results correspond well with measurements. Results from a blade loss simulation are presented to exemplify the versatility of the developed code. • 17. KTH, School of Engineering Sciences (SCI), Mechanics. Aeroelastic simulation of wind turbine dynamics2005Doctoral thesis, comprehensive summary (Other academic) The work in this thesis deals with the development of an aeroelastic simulation tool for horizontal axis wind turbine applications. Horizontal axis wind turbines can experience significant time varying aerodynamic loads, potentially causing adverse effects on structures, mechanical components, and power production. The needs for computational and experimental procedures for investigating aeroelastic stability and dynamic response have increased as wind turbines become lighter and more flexible. A finite element model for simulation of the dynamic response of horizontal axis wind turbines has been developed. The developed model uses the commercial finite element system MSC.Marc, focused on nonlinear design and analysis, to predict the structural response. The aerodynamic model, used to transform the wind flow field to loads on the blades, is a Blade-Element/Momentum model. 
The aerodynamic code is developed by The Swedish Defence Research Agency (FOI, previously named FFA) and is a state-of-the-art code incorporating a number of extensions to the Blade-Element/Momentum formulation. The software SOSIS-W, developed by Teknikgruppen AB, was used to generate wind time series for modelling different wind conditions. The method is general, and different configurations of the structural model and various types of wind conditions can be simulated. The model is primarily intended for use as a research tool when influences of specific dynamic effects are investigated. Verification results are presented and discussed for an extensively tested Danwin 180 kW stall-controlled wind turbine. Code predictions of mechanical loads, fatigue and spectral properties, obtained at different conditions, have been compared with measurements. A comparison is also made between measured and calculated loads for the Tjæreborg 2 MW wind turbine during emergency braking of the rotor. The simulated results correspond well to measured data. • 18. KTH, School of Engineering Sciences (SCI), Mechanics. Emergency stop simulation using a finite element model developed for large blade deflections2006In: Wind Energy, ISSN 1095-4244, E-ISSN 1099-1824, Vol. 9, no 3, p. 193-210Article in journal (Refereed) Predicting the load in every possible situation is necessary in order to build safe and optimized structures. A highly dynamical case where large loads develop is an emergency stop. Design simulation tools that can cope with the upcoming non-linearities will be especially important as the turbines get bigger and more flexible. The model developed here uses the advanced commercial finite element system MSC.Marc, focused on non-linear design and analysis, to predict the structural response. The aerodynamic model named AERFORCE, used to transform the wind to loads on the blades, is a blade element momentum model. 
A comparison is made between measured and calculated loads for the Tjæreborg wind turbine during emergency braking of the rotor. The simulation results correspond well with measured data. The conclusion is that the aeroelastic tool is likely to perform well when simulating more flexible turbines. • 19. KTH, School of Engineering Sciences (SCI), Mechanics. Influence of wind turbine flexibility on loads and power production2006In: Wind Energy, ISSN 1095-4244, E-ISSN 1099-1824, Vol. 9, no 3, p. 237-249Article in journal (Refereed) Most aeroelastic codes used today assume small blade deflections and application of loads on the undeflected structure. However, with the design of lighter and more flexible wind turbines, this assumption is not obvious. By scaling the system mass and stiffness properties equally, it is possible to compare wind turbines of different degrees of slenderness and at the same time keep the system frequencies the same in an undeformed state. The developed model uses the commercial finite element system MSC.Marc, focused on non-linear design and analysis, to predict the structural response. The aerodynamic model AERFORCE, used to transform the wind to loads on the blades, is a blade element momentum model. A comparison is made between different slenderness ratios in three wind conditions below rated wind speed. The results show that large blade deflections have a major influence on power production and the resulting structural loads and must be considered in the design of very slender turbines. • 20. KTH, School of Engineering Sciences (SCI), Mechanics. Non-linear vibrations of tensegrity structures2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis A study has been done on different methods to solve linear and non-linear dynamic problems for single- and multi-degree-of-freedom structures. To do that, direct time integration methods are used to solve the dynamic equilibrium equations. 
An attempt has been made to formulate general methods applicable to most structures. In this thesis the structures are made of cables and bars, but the solution methods presented can be applied to any structure; to use the implicit and explicit methods for an arbitrary structure, one needs to know its tangent stiffness matrix and mass matrix. It is then possible to analyze the dynamic response of structures under general loads, meaning that arbitrary forces may be applied at different nodes and the method generates results based on the applied forces. Obtaining the correct tangent stiffness matrix and mass matrix is crucial for establishing the dynamic equation at each node; only then will the methods solve the dynamic problems correctly. Furthermore, a parametric study has been performed to examine the effects of the time step, the stiffness and length of the elements, and other mechanical properties of the elements, enabling new results to be produced by varying each parameter. The study of x-frame tensegrity structures has also been continued by computing their dynamic response with the methods proposed in this thesis. Moreover, a way of reusing the solver codes of the current thesis for other structures is presented; as future work, the structure codes and the solver codes of this thesis can be combined for dynamic response analysis. In fact, the main effort of this thesis is on presenting different methods to solve various structures. • 21. KTH, School of Engineering Sciences (SCI), Mechanics. Numerical simulations of micro-organisms in shear flows2011Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis • 22. KTH, School of Engineering Sciences (SCI), Mechanics. 
Numerical studies on receptivity and control of a three-dimensional boundary layer2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis Receptivity to localized roughness elements in a three-dimensional boundary layer flow over a flat plate is studied by means of direct numerical simulations (DNS). The surface roughness is modeled by applying nonhomogeneous boundary conditions along the wall, as well as treated as a surface deformation by inserting the bump shape into the numerical mesh. Under the assumption of small roughness amplitudes, although different disturbance amplitudes are observed in the vicinity of the bump for the meshed and modeled cases, the boundary layer response downstream of the roughness is independent of the way the bump is implemented. Different roughness heights are considered in order to compare the boundary layer response of the two approaches. The boundary layer is also excited by randomly distributed surface roughness and the receptivity results are studied. Moreover, a simple model for natural roughness excites steady multi-wavenumber crossflow instabilities. A localised surface roughness, i.e. a control roughness, is applied to stabilise the latter. The control mode, which is subcritical with respect to transition, affects the most unstable steady mode. Suppression of the most dangerous mode is observed through nonlinear interactions with the control mode. • 23. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Electrical Engineering (EES), Space and Plasma Physics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. 
KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Structural Mechanics. The SQUID sounding rocket experiment2011In: Proceedings of the 20th ESA Symposium on European Rocket and Balloon Programmes and Related Research, European Space Agency, 2011, p. 159-166Conference paper (Refereed) The objective of the SQUID project is to develop and verify in flight a miniature version of a wire boom deployment mechanism to be used for electric field measurements in the ionosphere. In February 2011 a small ejectable payload, built by a team of students from The Royal Institute of Technology (KTH), was launched from Esrange on-board the REXUS-10 sounding rocket. The payload separated from the rocket, deployed and retracted the wire booms, landed with a parachute and was subsequently recovered. Here the design of the experiment and the post-flight analysis are presented. • 24. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. Phase change, surface tension and turbulence in real fluids2016Doctoral thesis, comprehensive summary (Other academic) Sprays are extensively used in industry, especially for fuels in internal combustion and gas turbine engines. An optimal fuel/air mixture prior to combustion is desired for these applications, leading to greater efficiency and minimal levels of emissions. The optimization depends on details regarding the different breakup, evaporation and mixing processes. 
Besides, one should take into consideration that these different steps depend on physical properties of the gas and fuel, such as density, viscosity, heat conductivity and surface tension. In this thesis the phase change and surface tension of a droplet for different flow conditions are studied by means of numerical simulations. This work is part of a larger effort aiming at developing models for sprays in turbulent flows. We are especially interested in the atomization regime, where the liquid breakup causes the formation of droplet sizes much smaller than the jet diameter. The behavior of these small droplets is important both for shedding more light on how to achieve homogeneity of the gas-fuel mixture and because it directly contributes to the development of large-eddy simulation (LES) models. The numerical approach is a challenging process as one must take into account the transport of heat, mass and momentum for a multiphase flow. We choose a lattice Boltzmann method (LBM) due to its convenient mesoscopic nature to simulate interfacial flows. A non-ideal equation of state is used to control the phase change according to local thermodynamic properties. We analyze the droplet and surrounding vapor for a hydrocarbon fuel close to the critical point. Under forced convection, the droplet evaporation rate is seen to depend on the vapor temperature and Reynolds number, where oscillatory flows can be observed. Marangoni forces are also present and drive the droplet internal circulation once the temperature difference at the droplet surface becomes significant. In isotropic turbulence, the vapor phase shows increasing fluctuations of the thermodynamic variables once the fluid approaches the critical point. The droplet dynamics is also investigated under turbulent conditions, where the presence of coherent structures with strong shear layers affects the mass transfer between the liquid-vapor flow, showing also a correlation with the droplet deformation. 
Here, the surface tension and droplet size play a major role and are analyzed in detail. • 25. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. Multirelaxation-time lattice Boltzmann model for droplet heating and evaporation under forced convection2015In: Physical Review E. Statistical, Nonlinear, and Soft Matter Physics, ISSN 1539-3755, E-ISSN 1550-2376, Vol. 91, no 4, article id 043012Article in journal (Refereed) We investigate the evaporation of a droplet surrounded by superheated vapor with relative motion between phases. The evaporating droplet is a challenging process, as one must take into account the transport of mass, momentum, and heat. Here a lattice Boltzmann method is employed where phase change is controlled by a nonideal equation of state. First, numerical simulations are compared to the D² law for a vaporizing static droplet and good agreement is observed. Results are then presented for a droplet in a Lagrangian frame under a superheated vapor flow. Evaporation is described in terms of the temperature difference between liquid and vapor and the inertial forces. The internal liquid circulation driven by surface-shear stresses due to convection enhances the evaporation rate. Numerical simulations demonstrate that for higher Reynolds numbers the dynamics of the vaporization flux can be significantly affected, which may cause an oscillatory behavior in the droplet evaporation. The droplet-wake interaction and local mass flux are discussed in detail. • 26. 
KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. Simulation of a suspended droplet under evaporation with Marangoni effects2016In: International Journal of Heat and Mass Transfer, ISSN 0017-9310, E-ISSN 1879-2189, Vol. 91, p. 853-860Article in journal (Refereed) We investigate the Marangoni effects in a hexane droplet under evaporation and close to its critical point. A lattice Boltzmann model is used to perform 3D numerical simulations. In a first case, the droplet is placed in its own vapor and a temperature gradient is imposed. The droplet locomotion through the domain is observed, where the temperature difference across the surface is proportional to the droplet velocity, and the Marangoni effect is confirmed. The droplet is then set under a forced convection condition. The results show that the Marangoni stresses play a major role in maintaining the internal circulation when the superheated vapor temperature is increased. Surprisingly, surface tension variations along the interface due to temperature change may affect heat transfer and internal circulation even for low Weber number. Other results and considerations regarding the droplet surface are also discussed. • 27. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. 
Lattice Boltzmann Method for the evaporation of a suspended droplet. 2013. In: Interfacial phenomena and heat transfer, ISSN 2167-857X, Vol. 1, p. 245-258. Article in journal (Refereed). In this paper we consider a thermal multiphase lattice Boltzmann method (LBM) to investigate the heating and vaporization of a suspended droplet. An important benefit of the LBM is that phase separation is generated spontaneously and jump conditions for heat and mass transfer are not imposed. We use double distribution functions in order to solve the momentum and energy equations. The force is incorporated via the exact difference method (EDM) scheme, where different equations of state (EOS) are used, including the Peng-Robinson EOS. The equilibrium and boundary conditions are carefully studied. Results are presented for a hexane droplet set to evaporate in a superheated gas, both under static conditions and under gravitational effects. For the static droplet, the numerical simulations show that capillary pressure and the cooling effect at the interface play a major role. When the droplet is convected due to the gravitational field, the relative motion between the droplet and the surrounding gas enhances the heat transfer. The evolution of the density and temperature fields is illustrated in detail. • 28. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW; Physicochemical Fluid Mechanics. Droplet deformation and heat transfer in isotropic turbulence. 2017. In: Journal of Fluid Mechanics, ISSN 0022-1120, E-ISSN 1469-7645, Vol. 820, p. 61-85. Article in journal (Refereed). The heat and mass transfer of deformable droplets in turbulent flows is crucial
to a wide range of applications, such as cloud dynamics and internal combustion engines. This study investigates a single droplet undergoing phase change in isotropic turbulence using numerical simulations with a hybrid lattice Boltzmann scheme. Phase separation is controlled by a non-ideal equation of state and density contrast is taken into consideration. Droplet deformation is caused by pressure and shear stress at the droplet interface. The statistics of thermodynamic variables are quantified and averaged over both the liquid and vapour phases. The occurrence of evaporation and condensation is correlated to temperature fluctuations, surface tension variation and turbulence intensity. The temporal spectra of droplet deformations are analysed and related to the droplet surface area. Different modes of oscillation are clearly identified from the deformation power spectrum for low Taylor Reynolds number Re_λ, whereas nonlinearities are produced with the increase of Re_λ, as intermediate frequencies are seen to overlap. As an outcome, a continuous spectrum is observed, which shows a decrease in the power spectrum that scales as ∼ f^(−3). Correlations between the droplet Weber number, deformation parameter, fluctuations of the droplet volume and thermodynamic variables are also developed. • 29. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW; Physicochemical Fluid Mechanics. Thermodynamics of a real fluid near the critical point in numerical simulations of isotropic turbulence. 2016. In: Physics of fluids, ISSN 1070-6631, E-ISSN 1089-7666, Vol.
28, no 12, article id 125105. Article in journal (Refereed). We investigate the behavior of a fluid near the critical point by using numerical simulations of weakly compressible three-dimensional isotropic turbulence. Much has been done for turbulent flows of an ideal gas. The primary focus of this work is to analyze fluctuations of thermodynamic variables (pressure, density, and temperature) when a non-ideal Equation Of State (EOS) is considered. In order to do so, a hybrid lattice Boltzmann scheme is applied to solve the momentum and energy equations. Previously unreported phenomena are revealed as the temperature approaches the critical point. Fluctuations in pressure, density, and temperature increase, followed by changes in their respective probability density functions. Due to the non-linearity of the EOS, it is seen that the variances of density and temperature and their covariance are equally important close to the critical point. Unlike the ideal-EOS case, significant differences in the thermodynamic properties are also observed when the Reynolds number is increased. We also address issues related to the spectral behavior and scaling of density, pressure, temperature, and kinetic energy. • 30. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics; University of Washington, USA. Droplet deformation and heat transfer in isotropic turbulence. 2016. Manuscript (preprint) (Other academic). The heat and mass transfer of deformable droplets in turbulent flows is crucial to a wide range of applications, such as cloud dynamics and internal combustion engines. This study investigates a droplet undergoing phase change in isotropic turbulence using numerical simulations with a hybrid lattice Boltzmann scheme.
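Several of the entries above (25-30) build on lattice Boltzmann schemes. As a rough illustration only, the following is a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann step for an athermal fluid on a periodic domain; the cited works use far more elaborate multirelaxation-time and hybrid thermal variants with non-ideal equations of state, none of which is reproduced here, and all names and parameters are illustrative.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (standard values)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium truncated to second order (c_s^2 = 1/3)."""
    cu = np.einsum('qd,xyd->qxy', c, u)              # c_q . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)             # |u|^2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK collision + streaming step; returns updated distributions."""
    rho = f.sum(axis=0)                              # density moment
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]  # velocity moment
    f = f - (f - equilibrium(rho, u)) / tau          # BGK relaxation towards equilibrium
    for q in range(9):                               # periodic streaming along c_q
        f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
    return f

# Tiny demo on an 8x8 periodic box with a small density bump
rho0 = np.ones((8, 8)); rho0[4, 4] = 1.1
f = equilibrium(rho0, np.zeros((8, 8, 2)))
mass_before = f.sum()
f = lbm_step(f)
assert np.isclose(f.sum(), mass_before)              # collision + streaming conserve mass
```

The assertion at the end checks the defining property of the scheme: the BGK collision conserves the density moment, and streaming merely relocates populations.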
We solve the momentum and energy transport equations, where phase separation is controlled by a non-ideal equation of state and density contrast is taken into consideration. Deformation is caused by pressure and shear stress at the droplet interface. The statistics of thermodynamic variables are quantified and averaged in terms of the liquid and vapor phases. The occurrence of evaporation and condensation is correlated to temperature fluctuations, surface tension variation and turbulence intensity. The temporal spectra of droplet deformations are analyzed and related to the droplet surface area. Different modes of oscillation are clearly identified from the deformation power spectrum for low Taylor Reynolds number $Re_\lambda$, whereas nonlinearities are produced with the increase of $Re_\lambda$, as intermediate frequencies are seen to overlap. As an outcome, a continuous spectrum is observed, which shows a decrease that scales as $\sim f^{-3}$. Correlations between the droplet Weber number, deformation parameter, fluctuations of the droplet volume and thermodynamic variables are also examined. • 31. KTH, School of Engineering Sciences (SCI), Mechanics, Physicochemical Fluid Mechanics; University of Washington, USA. Real fluids near the critical point in isotropic turbulence. In: Physics of fluids, ISSN 1070-6631, E-ISSN 1089-7666. Article in journal (Refereed). We investigate the behavior of a fluid near the critical point by using numerical simulations of weakly compressible three-dimensional isotropic turbulence. Much has been done for turbulent flows of an ideal gas. The primary focus of this work is to analyze fluctuations of thermodynamic variables (pressure, density and temperature) when a non-ideal Equation Of State (EOS) is considered.
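For reference, the Peng-Robinson equation of state named in entry 27 (and representative of the non-ideal EOS referred to in entries 28-31) has the standard textbook form

```latex
p = \frac{RT}{V_m - b} - \frac{a\,\alpha(T)}{V_m^2 + 2bV_m - b^2},
\qquad
\alpha(T) = \Bigl[1 + \kappa\bigl(1 - \sqrt{T/T_c}\bigr)\Bigr]^2,
```

where $a$ and $b$ are fixed by the critical pressure and temperature and $\kappa$ by the acentric factor. Its non-linearity in the molar volume $V_m$ is what couples density and temperature fluctuations near the critical point, the effect analyzed in entries 29 and 31.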
In order to do so, a hybrid lattice Boltzmann scheme is applied to solve the momentum and energy equations. Previously unreported phenomena are revealed as the temperature approaches the critical point. These phenomena include increased fluctuations in pressure, density and temperature, followed by changes in their respective probability density functions (PDFs). Unlike the ideal-EOS case, significant differences in the thermodynamic properties are also observed when the Reynolds number is increased. We also address issues related to the spectral behavior and scaling of density, pressure, temperature and kinetic energy. • 32. KTH, School of Chemical Science and Engineering (CHE), Chemistry, Analytical Chemistry; KTH, School of Engineering Sciences (SCI), Mechanics. Multi-step dielectrophoresis for separation of particles. 2006. In: Journal of Chromatography A, ISSN 0021-9673, E-ISSN 1873-3778, Vol. 1131, no 1-2, p. 261-266. Article in journal (Refereed). A new concept for separation of particles, based on repetitive dielectrophoretic trapping and release in a flow system, is proposed. Calculations using the finite element method have been performed to envision the particle behavior and the separation effectiveness of the proposed method. As a model system, polystyrene beads in deionized water and a micro-flow channel with arrays of interdigitated electrodes have been used. Results show that the resolution increases as a direct function of the number of trap-and-release steps, and that a difference in size has a larger influence on the separation than a difference in other dielectrophoretic properties. About 200 trap-and-release steps would be required to separate particles with a size difference of 0.2%.
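The size dependence underlying the dielectrophoretic separation in entries 32-33 follows from the standard expression for the time-averaged dielectrophoretic force on a spherical particle (a textbook result, not taken from the papers themselves):

```latex
\langle F_{\mathrm{DEP}} \rangle
  = 2\pi \varepsilon_m r^3 \,\operatorname{Re}\!\bigl[K(\omega)\bigr]\, \nabla |E_{\mathrm{rms}}|^2,
\qquad
K(\omega) = \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*},
```

where $r$ is the particle radius, $\varepsilon_m$ the medium permittivity, and $K(\omega)$ the Clausius-Mossotti factor built from the complex permittivities of particle and medium. Positive $\operatorname{Re}[K]$ attracts particles to field maxima (positive DEP), negative repels them (negative DEP), and the $r^3$ dependence explains why size differences dominate the separation in entry 32.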
The enhanced separation power of dielectrophoresis with multiple steps could be of great importance, not only for fractionation of particles with small differences in size, but also for measuring changes in surface conductivity, or for separations based on combined differences in size and dielectric properties. • 33. KTH, School of Chemical Science and Engineering (CHE), Chemistry, Analytical Chemistry; KTH, School of Engineering Sciences (SCI), Mechanics. Superpositioned dielectrophoresis for enhanced trapping efficiency. 2005. In: Electrophoresis, ISSN 0173-0835, E-ISSN 1522-2683, Vol. 26, no 22, p. 4252-4259. Article in journal (Refereed). One of the major applications of dielectrophoresis is selective trapping and fractionation of particles. If the surrounding medium is of low conductivity, the trapping force is high, but if the conductivity increases, the attraction decreases and may even become negative. However, high-conductivity media are essential when working with biological material such as living cells. In this paper, some basic calculations have been performed, and a model has been developed which employs both positive and negative dielectrophoresis in a channel with interdigitated electrodes. The finite element method was utilized to predict the trajectories of Escherichia coli bacteria in the superpositioned electrical fields. It is shown that a drastic improvement in trapping efficiency can be obtained in this way when a high-conductivity medium is employed. • 34. Alegre, Cesar. KTH, School of Engineering Sciences (SCI), Mechanics. The effect of arterial flow elasticity on the flow through a stenosis. 2016. In: Proceedings of the International Conference on Bionic Engineering 2016, 2016. Conference paper (Refereed). • 35. Alegre-Martínez, C. KTH, School of Engineering Sciences (SCI), Mechanics.
On the axial distribution of plaque stress: Influence of stenosis severity, lipid core stiffness, lipid core length and fibrous cap stiffness. 2019. In: Medical Engineering and Physics, Vol. 68, p. 76-84. Article in journal (Refereed). Numerical simulations of blood flow through a partially blocked axisymmetric artery are performed to investigate the stress distributions in the plaque. We show that the combined effect of stenosis severity and the stiffness of the lipid core can drastically change the axial stress distribution, strongly affecting the potential sites of plaque rupture. The core stiffness is also an important factor when assessing plaque vulnerability: a mild stenosis with a lipid-filled core presents higher stress levels than a severe stenosis with a calcified plaque. A shorter lipid core gives rise to an increase in the stress levels. However, the fibrous cap stiffness does not influence the stress distributions for the range of values considered in this work. Based on these mechanical analyses, we identify potential sites of rupture in the axial direction for each case: the midpoints of the upstream and downstream regions of the stenosis (for severe, lipid-filled plaques), the ends of the lipid core (for short cores), and the middle of the stenosis (for mild stenoses with positive remodelling of the arterial wall). • 36. KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering, MWL Flow acoustics; KTH, School of Industrial Engineering and Management (ITM), Centres, Competence Center for Gas Exchange (CCGEx); KTH, School of Engineering Sciences (SCI), Mechanics.
Large eddy simulations of acoustic-flow interaction at an orifice plate. 2015. In: Journal of Sound and Vibration, ISSN 0022-460X, E-ISSN 1095-8568, Vol. 345, p. 162-177. Article in journal (Refereed). The scattering of plane waves by an orifice plate with a strong bias flow, placed in a circular or square duct, is studied through large eddy simulations and dynamic mode decomposition. The acoustic-flow interaction is illustrated, showing that incoming sound waves at a Strouhal number of 0.43 trigger a strong axisymmetric flow structure in the orifice in the square duct, and interact with a self-sustained axisymmetric oscillation in the circular-duct orifice. These structures then generate a strong sound, increasing the acoustic energy at the frequency of the incoming wave. The structure triggered in the square duct is weaker than that present in the circular duct, but stronger than structures triggered by waves at other frequencies. The computed scattering matrix agrees well with measurements. However, the results are found to be sensitive to the inflow: the self-sustained oscillation in the circular-duct simulation is an artefact of an axisymmetric, undisturbed inflow. This illustrates a problem with using an undisturbed inflow for studying vortex-sound effects, and can be of interest when considering musical instruments, where the aim is to obtain maximum amplification of specific tones. Further, it shows that at the frequency where an amplification of acoustic energy is found for the orifice plate, the flow has a natural instability, which is suppressed by non-axisymmetry and incoming disturbances. • 37. KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering, MWL Flow acoustics; KTH, School of Industrial Engineering and Management (ITM), Centres, Competence Center for Gas Exchange (CCGEx).
KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW. LES of Acoustic-Flow Interaction at an Orifice Plate. 2012. In: 18th AIAA/CEAS Aeroacoustics Conference (33rd AIAA Aeroacoustics Conference), 2012. Conference paper (Other academic). The scattering of plane waves by a thick orifice plate, placed in a circular or square duct with flow, is studied through Large Eddy Simulation. The scattering matrix is computed and compared to measurements, showing reasonably good agreement except around one frequency ($St \approx 0.4$). Here a stronger amplification of acoustic energy is observed in the circular-duct simulations than in the measurements and the square-duct simulations. In order to improve the understanding of the interaction between an incoming wave, the flow, and the plate, a few frequencies are studied in more detail. A Dynamic Mode Decomposition is performed to identify flow structures at significant frequencies. This shows that the amplification of acoustic energy occurs at the frequency where the jet in the circular duct has an axisymmetric instability. Furthermore, the incoming wave slightly amplifies this instability and suppresses background flow fluctuations. • 38. KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering, MWL Flow acoustics; KTH, School of Industrial Engineering and Management (ITM), Centres, Competence Center for Gas Exchange (CCGEx); KTH, School of Engineering Sciences (SCI), Mechanics.
Scattering of Plane Waves by a Constriction. 2011. In: Proceedings of ASME Turbo Expo 2011, Vol. 7, Parts A-C, American Society of Mechanical Engineers, 2011, p. 1043-1052. Conference paper (Refereed). Linear scattering of low-frequency waves by an orifice plate has been studied using Large Eddy Simulation and an acoustic two-port model. The results have been compared to measurements, with good agreement for waves coming from the downstream side. For waves coming from the upstream side the reflection is over-predicted, indicating that not enough of the acoustic energy is converted to vorticity at the upstream edge of the plate. Furthermore, the sensitivity to the amplitude of the acoustic waves has been studied, showing that it is difficult to simultaneously keep the amplitude low enough for linearity and high enough to suppress flow noise with the relatively short time series available in LES. • 39. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW; University of Cambridge, United Kingdom. Turbulent boundary layers over flat plates and rotating disks - The legacy of von Kármán: A Stockholm perspective. 2013. In: European journal of mechanics. B, Fluids, ISSN 0997-7546, E-ISSN 1873-7390, Vol. 40, p. 17-29. Article in journal (Refereed). Many of the findings and ideas of von Kármán are still of interest to the fluid dynamics community.
For instance, his result that the mean velocity distribution in turbulent flows has a logarithmic behavior with respect to the distance from the wall is still a cornerstone for everybody working in wall-bounded turbulence, and it was first presented to an international audience in Stockholm at the Third International Congress for Applied Mechanics in 1930. In this paper we discuss this result and also how the so-called von Kármán constant can be determined in a new, simple way. We also discuss the possibility of a second (outer) maximum of the streamwise velocity fluctuations, a result that was implicit in some of the assumptions proposed by von Kármán. • 40. KTH, School of Engineering Sciences (SCI), Mechanics, Fluid Physics; Linné Flow Center, FLOW; University of Cambridge, United Kingdom. Rotation Effects on Wall-Bounded Flows: Some Laboratory Experiments. 2014. In: Modeling Atmospheric and Oceanic Flows: Insights from Laboratory Experiments and Numerical Simulations, Wiley-Blackwell, 2014, p. 83-100. Chapter in book (Other academic). This chapter considers three different categories: (1) system rotation vector parallel to the mean-flow vorticity; (2) flows set up by the rotation of one or more boundaries; and (3) system rotation aligned with the mean-flow direction. The flows in the different categories differ with respect to their geometry but, more importantly, in how rotation affects them. The chapter focuses on three flows that are relatively amenable to laboratory investigation, one from each category: plane Couette flow undergoing system rotation about an axis normal to the mean flow, the von Kármán boundary-layer flow, and axially rotating pipe flow.
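The logarithmic mean-velocity distribution attributed to von Kármán in entry 39 is, in the usual inner scaling,

```latex
U^+ = \frac{1}{\kappa}\,\ln y^+ + B,
\qquad
U^+ = \frac{U}{u_\tau}, \quad y^+ = \frac{y\,u_\tau}{\nu},
```

where $u_\tau$ is the friction velocity, $\nu$ the kinematic viscosity, $\kappa$ the von Kármán constant (commonly quoted in the range 0.38-0.41), and $B \approx 4$-$5$ for smooth walls; these standard values are given for orientation and are not taken from entry 39 itself.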
It defines the important nondimensional parameters that govern them and discusses some of their interesting flow features in various parameter ranges. Various experimental realizations of the three flow systems are described, and considerations and limitations regarding the laboratory systems are discussed. • 41. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW. A new scaling for the streamwise turbulence intensity in wall-bounded turbulent flows and what it tells us about the "outer" peak. 2011. In: Physics of fluids, ISSN 1070-6631, E-ISSN 1089-7666, Vol. 23, no 4, p. 041702. Article in journal (Refereed). One recent focus of experimental studies of turbulence in high-Reynolds-number wall-bounded flows has been the scaling of the root mean square of the fluctuating streamwise velocity, but progress has largely been impaired by spatial-resolution effects of hot-wire sensors. For the near-wall peak, recent results seem to have clarified the controversy; however, one of the remaining issues is the emergence of a second (so-called outer) peak at high Reynolds numbers. The present letter introduces a new scaling of the local turbulence intensity profile, based on the diagnostic plot by Alfredsson and Örlü [Eur. J. Mech. B/Fluids 29, 403 (2010)], which predicts the location and amplitude of the "outer" peak and suggests that its presence is a question of sufficiently large scale separation. • 42. KTH, School of Engineering Sciences (SCI), Mechanics.
Instability, transition and turbulence in plane Couette flow with system rotation. 2005. In: IUTAM Symposium on Laminar-Turbulent Transition and Finite Amplitude Solutions / [ed] Mullin, T; Kerswell, R, Springer Netherlands, 2005, Vol. 77, p. 173-193. Conference paper (Refereed). System rotation may have either stabilizing or destabilizing effects on shear flows, depending on the direction of the rotation vector compared to the vorticity vector of the mean flow. This study describes experimental results on laminar, transitional and turbulent plane Couette flow with both stabilizing and destabilizing system rotation. For laminar flow with destabilizing rotation, roll cells appear in the flow and may undergo several different types of secondary instabilities; especially interesting is a repeating pattern of wavy structures followed by breakdown, after which roll cells reappear in a cyclic pattern. For higher Reynolds numbers, roll cells appear also in a turbulent environment. It is also shown how stabilizing rotation may quench the turbulence completely. • 43. KTH, School of Engineering Sciences (SCI), Mechanics, Fluid Physics; Linné FLOW Centre, KTH Mechanics, Royal Institute of Technology, SE-100 44 Stockholm, Sweden. A New Way to Determine the Wall Position and Friction Velocity in Wall-Bounded Turbulent Flows. 2012. In: PROGRESS IN TURBULENCE AND WIND ENERGY IV / [ed] Oberlack, M; Peinke, J; Talamelli, A; Castillo, L; Hölling, M, Springer-Verlag Berlin, 2012, p. 181-185. Conference paper (Refereed). • 44. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW.
Large-Eddy BreakUp Devices - a 40 Years Perspective from a Stockholm Horizon. 2018. In: Flow Turbulence and Combustion, ISSN 1386-6184, E-ISSN 1573-1987, Vol. 100, no 4, p. 877-888. Article in journal (Refereed). In the beginning of the 1980s, Large Eddy BreakUp (LEBU) devices, thin plates or airfoils mounted in the outer part of turbulent boundary layers, were shown to be able to change the turbulent structure and intermittency as well as reduce turbulent skin friction. In some wind-tunnel studies it was also claimed that a net drag reduction was obtained, i.e. the reduction in skin-friction drag was larger than the drag on the devices. However, towing-tank experiments with a flat plate at high Reynolds numbers, as well as with an axisymmetric body, showed no net reduction, but instead an increase in total drag. Recent large-eddy simulations have explored the effect of LEBUs on the turbulent boundary layer, and evaluations of the total drag show results similar to the towing-tank experiments. Despite these negative results in terms of net drag reduction, LEBUs manipulate the boundary layer in an interesting way, which explains why they still attract some interest. The reasons for the positive results in the wind-tunnel studies, as compared to the drag measurements, are discussed here, although no definite answer for the differences can be given. • 45. KTH, School of Engineering Sciences (SCI), Mechanics, Fluid Physics; Linné Flow Center, FLOW. The diagnostic plot - a litmus test for wall bounded turbulence data. 2010. In: European journal of mechanics. B, Fluids, ISSN 0997-7546, E-ISSN 1873-7390, Vol. 29, no 6, p.
403-406. Article in journal (Refereed). A diagnostic plot is suggested that can be used to judge the reliability of wall-bounded turbulence data for the mean and the rms of the streamwise velocity, near the wall, around the maximum in the rms, as well as in the outer region. The important feature of the diagnostic plot is that neither the wall position nor the friction velocity needs to be known, since it shows the rms value as a function of the streamwise mean velocity, both normalized with the free-stream velocity. One must remember, however, that passing the test is a necessary, but not sufficient, condition for good data quality. • 46. KTH, School of Engineering Sciences (SCI), Mechanics, Fluid Physics; Linné Flow Center, FLOW; KTH, School of Engineering Sciences (SCI), Mechanics. The diagnostic plot: a new way to appraise turbulent boundary-layer data. 2009. In: ADVANCES IN TURBULENCE XII: PROCEEDINGS OF THE 12TH EUROMECH EUROPEAN TURBULENCE CONFERENCE / [ed] Eckhardt, B., 2009, Vol. 132, p. 609-612. Conference paper (Refereed). • 47. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW.
The viscous sublayer revisited - exploiting self-similarity to determine the wall position and friction velocity. 2011. In: Experiments in Fluids, ISSN 0723-4864, E-ISSN 1432-1114, Vol. 51, no 1, p. 271-280. Article in journal (Refereed). In experiments using hot wires near the wall, it is well known that wall-interference effects between the hot wire and the wall give rise to errors, and mean velocity data from the viscous sublayer can usually not be used to determine the wall position, nor the friction velocity from the linear velocity distribution. Here, we introduce a new method that takes advantage of the similarity of the probability density functions (PDFs), or rather the cumulative distribution functions (CDFs), in the near-wall region. By using the velocity data in the CDF in a novel way, it is possible to circumvent the problem associated with heat transfer to the wall and to accurately determine both the wall position and the friction velocity. Prior to its exploitation, the self-similarity of the distribution functions of the streamwise velocity fluctuations within the viscous sublayer is established, and it is shown that they can accurately be described by a lognormal distribution. • 48. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW. A new formulation for the streamwise turbulence intensity distribution. 2011. In: 13th European Turbulence Conference (ETC13): Wall-Bounded Flows And Control Of Turbulence, Institute of Physics Publishing (IOPP), 2011, p.
022002. Conference paper (Refereed). Numerical and experimental data from zero-pressure-gradient turbulent boundary layers over smooth walls have been analyzed by means of the so-called diagnostic plot introduced by Alfredsson & Örlü [Eur. J. Mech. B/Fluids 29, 403 (2010)]. In the diagnostic plot the local turbulence intensity is shown as a function of the local mean velocity normalized with a reference velocity scale. In the outer region of the boundary layer a universal linear decay of the turbulence intensity is observed, independent of Reynolds number. The deviation from this linear region appears in the buffer region and seems to be universal when normalized with the friction velocity. Therefore, a new empirical fit for the streamwise velocity turbulence intensity distribution is proposed, and the results are compared with up-to-date reliable high-Reynolds-number experiments and extrapolated towards Reynolds numbers relevant to atmospheric boundary layers. • 49. KTH, School of Engineering Sciences (SCI), Mechanics; Linné Flow Center, FLOW. A new formulation for the streamwise turbulence intensity distribution in wall-bounded turbulent flows. 2012. In: European journal of mechanics. B, Fluids, ISSN 0997-7546, E-ISSN 1873-7390, Vol. 36, p. 167-175. Article in journal (Refereed). The distribution of the streamwise velocity turbulence intensity has recently been discussed in several papers, both from the viewpoint of new experimental results and attempts to model its behavior.
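The diagnostic plot recurring in entries 41 and 45-49 needs only the mean and rms of the streamwise velocity, normalized by a reference (free-stream) velocity. The sketch below illustrates the construction on synthetic profile data; the linear decay coefficients are made up for the example and are not the fits reported in the papers.

```python
import numpy as np

def diagnostic_plot_coords(U, u_rms, U_inf):
    """Coordinates of the diagnostic plot: local turbulence intensity
    u_rms/U versus the normalized mean velocity U/U_inf. Note that
    neither the wall position nor the friction velocity enters."""
    U = np.asarray(U, dtype=float)
    u_rms = np.asarray(u_rms, dtype=float)
    return U / U_inf, u_rms / U

# Synthetic, illustrative profile: the intensity decays linearly with
# U/U_inf in the outer region, as the entries above report (coefficients
# 0.28 and 0.24 are purely illustrative).
U_inf = 1.0
U = np.linspace(0.3, 1.0, 50)                  # mean velocity samples
u_rms = U * (0.28 - 0.24 * U / U_inf)          # rms consistent with a linear decay

x, y = diagnostic_plot_coords(U, u_rms, U_inf)
slope, intercept = np.polyfit(x, y, 1)         # recover the outer-region line
assert np.isclose(slope, -0.24)
assert np.isclose(intercept, 0.28)
```

Because the ordinate is u_rms/U and the abscissa U/U_inf, experimental profiles from different facilities can be compared directly, without the uncertain wall-position and friction-velocity calibration; that is the "litmus test" of entry 45.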
In the present paper numerical and experimental data from zero pressure-gradient turbulent boundary layers, channel and pipe flows over smooth walls have been analyzed by means of the so-called diagnostic plot introduced by Alfredsson & Örlü [P.H. Alfredsson, R. Örlü, The diagnostic plot - a litmus test for wall bounded turbulence data, Eur. J. Mech. B Fluids 29 (2010) 403-406]. In the diagnostic plot the local turbulence intensity is plotted as a function of the local mean velocity normalized with a reference velocity scale. Alfredsson et al. [P.H. Alfredsson, A. Segalini, R. Örlü, A new scaling for the streamwise turbulence intensity in wall-bounded turbulent flows and what it tells us about the outer peak, Phys. Fluids 23 (2011) 041702] observed that in the outer region of the boundary layer a universal linear decay of the turbulence intensity, independent of the Reynolds number, exists. This approach has been generalized for channel and pipe flows as well, and it has been found that the deviation from the previously established linear region appears at a given wall distance in viscous units (around 120) for all three canonical flows. Based on these results, new empirical fits for the streamwise velocity turbulence intensity distribution of each canonical flow are proposed. Coupled with a mean streamwise velocity profile description, the model provides a composite profile for the streamwise variance profile that agrees nicely with existing numerical and experimental data. Extrapolation of the proposed scaling to high Reynolds numbers predicts the emergence of a second peak of the streamwise variance profile that at even higher Reynolds numbers overtakes the inner one. • 50. KTH, School of Engineering Sciences (SCI), Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, Centres, SeRC - Swedish e-Science Research Centre. Kufa Univ, Coll Engn, Al Najaf, Iraq. KTH, School of Engineering Sciences (SCI), Mechanics.
KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, Centres, SeRC - Swedish e-Science Research Centre. KTH, School of Engineering Sciences (SCI), Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, Centres, SeRC - Swedish e-Science Research Centre. Ohio Univ, Dept Mech Engn, Athens, OH 45701 USA. Interface-resolved simulations of particle suspensions in Newtonian, shear thinning and shear thickening carrier fluids (2018). In: Journal of Fluid Mechanics, ISSN 0022-1120, E-ISSN 1469-7645, Vol. 852, p. 329-357. Article in journal (Refereed). We present a numerical study of non-colloidal spherical and rigid particles suspended in Newtonian, shear thinning and shear thickening fluids employing an immersed boundary method. We consider a linear Couette configuration to explore a wide range of solid volume fractions (0.1 <= Phi <= 0.4) and particle Reynolds numbers (0.1 <= Re_p <= 10). We report the distribution of solid and fluid phase velocity and solid volume fraction and show that close to the boundaries inertial effects result in a significant slip velocity between the solid and fluid phase. The local solid volume fraction profiles indicate particle layering close to the walls, which increases with the nominal Phi. This feature is associated with the confinement effects. We calculate the probability density function of local strain rates and compare the latter's mean value with the values estimated from the homogenisation theory of Chateau et al. (J. Rheol., vol. 52, 2008, pp. 489-506), indicating a reasonable agreement in the Stokesian regime. Both the mean value and standard deviation of the local strain rates increase primarily with the solid volume fraction and secondarily with the Re_p.
The wide spectrum of the local shear rate and its dependency on Phi and Re_p point to the deficiencies of the mean value of the local shear rates in estimating the rheology of these non-colloidal complex suspensions. Finally, we show that in the presence of inertia, the effective viscosity of these non-colloidal suspensions deviates from that of Stokesian suspensions. We discuss how inertia affects the microstructure and provide a scaling argument to give a closure for the suspension shear stress for both Newtonian and power-law suspending fluids. The stress closure is valid for moderate particle Reynolds numbers, O(Re_p) ~ 10.
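For context, the inertialess (Stokesian) baseline that the effective viscosity is compared against is often summarized with an empirical correlation such as the Eilers fit; a toy evaluation over the abstract's range of volume fractions (the correlation and the choice phi_max = 0.6 are standard literature values, not taken from this paper):

```python
def eilers(phi, phi_max=0.6):
    """Eilers fit: relative viscosity of a Stokesian suspension of rigid spheres."""
    return (1.0 + 1.25 * phi / (1.0 - phi / phi_max)) ** 2

for phi in (0.1, 0.2, 0.3, 0.4):
    print(f"phi = {phi:.1f}  ->  eta_r = {eilers(phi):.3f}")
```

At phi = 0.4 the fit gives a relative viscosity of 6.25; suspensions at finite particle Reynolds number deviate upward from such an inertialess baseline.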
https://flow3d.co.kr/hybrid-modeling-on-3d-hydraulic-features-of-a-step-pool-unit/figure-7-boxplots-for-the-distributions-of-the-mass-averaged-flow-kinetic-energy/
Figure 7: Boxplots for the distributions of the mass-averaged flow kinetic energy (KE, panels a-f), turbulence kinetic energy (TKE, panels g-l), and turbulent dissipation (εT, panels m-r) in the pool for all the six tested discharges (the plots at the same discharge are in the same row). The mass-averaged values were calculated every 2 cm in the streamwise direction. The flow direction is from left to right in all the plots. The general locations of the contraction section for all the flow rates are marked by the dashed lines, except for Q = 5 L/s when the jump is located too close to the step. The longitudinal distance taken up by negative slope in the pool for the inspected range is shown by shaded area in each plot.
https://chemistry.stackexchange.com/questions/112776/why-is-there-a-lone-pair-in-thionyl-fluoride
# Why is there a lone pair in thionyl fluoride? Why is there a lone pair in $$\ce{SOF2}$$? I drew its structure, which according to me should look like this: Why is there a lone pair on sulfur? Isn't its octet complete? If yes, why should it expand its octet and gain more electrons? • How many valence electrons did S have in the first place? Apr 15 '19 at 5:25 • @Ivan Oh you're right! S has 6. Apr 15 '19 at 5:47 • Welcome to Chemistry.SE! Take the tour to get familiar with this site. Mathematical expressions and equations can be formatted using $\LaTeX$ syntax. I have updated your post with chemistry markup. If you want to know more, please have a look here and here. We prefer to not use MathJax in the title field, see here for details. Apr 15 '19 at 13:37 When you want to write a Lewis structure, I suggest you start by considering how many valence electrons each atom has. In your case, you would have: S = 6; F = 7 each (two fluorines, so 14); O = 6. The final total is 6 + 14 + 6 = 26. The structure you guessed is correct: sulfur is the central atom. You start by drawing a single bond to each atom, getting something like this: F | S–O | F Then, you can draw a double bond for oxygen: F | S=O | F If you do the math, by counting the electrons that you put in the previous structure, you would have 2 + 2 + 4 (two S-F bonds and a S=O) = 8. Then we add in the lone pairs: three on each halogen, two for oxygen, and one for sulfur. Therefore we add 3 × 2 × 2 = 12 electrons (three lone pairs on each fluorine) plus 4 electrons on O (the two lone pairs), and we have a total of 24. The final 2, bringing the total to 26, come from the lone pair on sulfur. This makes sense because sulfur is in period 3, so it is possible for it to hold more than 8 electrons: in other words, period-3 elements are not strictly limited to an octet. • A much, much better description would use partial charges at the sulfur and oxygen and a single bond instead.
A double bond would require d-orbitals from sulfur, a theory that has been disproved. This has been discussed on our platform a couple of times. In any case, following the octet rule is usually safer than expecting 'octet expansion' (because the latter is wrong). Apr 15 '19 at 13:44 • You are right, thank you for catching that! Do you have any references about what you said? :) – Pier Apr 15 '19 at 20:03 • See for example chemistry.stackexchange.com/questions/29101/… Apr 16 '19 at 20:36
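The electron bookkeeping in the accepted answer can be checked mechanically; a short sketch (the dictionaries are just illustrative bookkeeping for this one structure, not a chemistry library):

```python
# Valence electrons available in SOF2
valence = {"S": 6, "O": 6, "F": 7}
total = valence["S"] + valence["O"] + 2 * valence["F"]  # 26 valence electrons

# Bonds in the sketched structure: two S-F single bonds and one S=O double bond
bonding = 2 * 2 + 4

# Lone pairs: three per F (two fluorines), two on O, one on S; 2 electrons each
lone_pairs = 2 * 3 + 2 + 1
nonbonding = 2 * lone_pairs

assert total == 26
assert bonding + nonbonding == total  # every valence electron is accounted for
```

The lone pair on sulfur is exactly what makes the count close: without it, only 24 of the 26 valence electrons would be placed.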
http://divisbyzero.com/page/4/
Posted by: Dave Richeson | May 11, 2011 ## What shape are the golden arches? Every day for lunch I eat salad (made with vegetables from our local farmers’ market or from our college’s organic farm) and homemade yogurt and granola. The only time I ever eat fast food is on long car trips. So why, I ask you, did the question “What shape are the golden arches?” pop into my head? I have no idea. But once it did, I just had to investigate. A quick internet search was inconclusive. Commenters on discussion forums assert that they are a pair of parabolas or a pair of catenary curves. But the credibility of the sources is questionable. So I thought I’d see what I could determine using Geogebra. It turns out that the arches are definitely not parabolas (I didn’t think they were). The catenary is a good fit, but it still isn’t quite perfect. The best fit is an ellipse (or part of an ellipse)! Check out the applet that I made, and see for yourself. Posted by: Dave Richeson | May 4, 2011 ## Auden: minus times minus equals plus, the reason for this we need not discuss I stumbled upon this quote by W. H. Auden (from A Certain World: A Commonplace Book, 1970). Of course, the natural sciences are just as “humane” as letters. There are, however, two languages, the spoken verbal language of literature, and the written sign language of mathematics, which is the language of science. This puts the scientist at a great advantage, for, since like all of us, he has learned to read and write, he can understand a poem or a novel, whereas there are very few men of letters who can understand a scientific paper once they come to the mathematical parts. When I was a boy, we were taught the literary languages, like Latin and Greek, extremely well, but mathematics atrociously badly. 
Beginning with the multiplication table, we learned a series of operations by rote which, if remembered correctly, gave the "right" answer, but about any basic principles, like the concept of number, we were told nothing. Typical of the teaching methods then in vogue is this mnemonic which I had to learn. Minus times Minus equals Plus: The reason for this we need not discuss. Posted by: Dave Richeson | April 27, 2011 ## A pyramidologist's value for pi Recently I came across two theories about the design of the Great Pyramid of Giza. • If we construct a circle with the altitude of the pyramid as its radius, then the circumference of the circle is equal to the perimeter of the base of the pyramid. Said another way, if we build a hemisphere with the same height as the pyramid, then the equator has the same length as the perimeter of the pyramid. • Each face of the pyramid has the same area as the square of the altitude of the pyramid. Apparently these are favorite mathematical facts (especially the first one) for pyramidologists who look for mathematical relations in the measurements of the pyramids that help justify their cultish belief in the mystical power of the pyramids. Of course we should separate the mathematical properties of the pyramids that may have been legitimate design decisions by the architects, from the crazy meanings that are often attached to them. I have no training in the history of Egyptian mathematics or in the history of the pyramids, so I can't really assess their likelihood of being true (my guess: the first one is an amazing coincidence, the second is more likely to be intentional). However, one interesting fact is that if the first one was intentional, then they were using the value 3.143 for pi, which is significantly better than the value found in the Egyptian Rhind papyrus (3.16), which was written 600-800 years after the construction of the pyramids. Just for fun, here are a few mathematical exercises: 1.
Check these facts using the actual measurements of the pyramid (you can take altitude to be 146.6 meters and the length of one side of the pyramid to be 230.4 meters). They are indeed remarkably close! 2. Assume that the first one is true. Use the measurements given in (1) to show that the architects were using the value 3.143 for pi. 3. Assume that we have a pyramid for which both of these facts are true. Show that this would imply that $\pi=2\sqrt{2\sqrt{5}-2}=3.1446\ldots$ Has anyone seen this approximation for pi before? I didn’t find it after performing a quick search of the internet. [Update, another way of writing this approximation is $4\sqrt{1/\varphi}$, where $\varphi$ is the golden ratio.] [The photograph of the Pyramid of Giza is from Wikipedia.] Posted by: Dave Richeson | April 26, 2011 ## What do you want on your tombstone? I’ve come across a few mathematicians or scientists who have been so proud of their scholarly achievements that they’ve asked for them to be put on their headstone when they die (or have had their achievements placed on their headstones by someone else). Please let me know if you know of others. [Update: thanks to folks on Twitter I learned of a few more. I've added them to the list.] Archimedes—sphere/cylinder Archimedes’s mathematical accomplishments are numerous. But he requested that his tombstone display a sphere inscribed in a cylinder with the ratio 3:2. He was proud of his discovery that the ratio of the volume of the cylinder to the sphere and the ratio of the surface area of the cylinder (including the top and bottom) to the sphere are both 3:2. (He not only discovered the volume and surface area formulas for the sphere, but also showed that the same constant, pi, appeared in these formulas and the formulas for the circle.) 
The Greeks knew how to inscribe in a circle, using only a straightedge and compass, an equilateral triangle (3-gon), a square (4-gon), a regular pentagon (5-gon), a regular pentadecagon (15-gon), and any $(2^k)$-gon, $(2^k\cdot 3)$-gon, $(2^k\cdot 5)$-gon, and $(2^k\cdot 15)$-gon. That’s it. Then, 2000 years later, the 18 year old Gauss showed that it was possible to do the same with a 17-gon (and later certain other regular polygons). He was so proud of this discovery that he decided to pursue a career in mathematics.  He also asked that a 17-gon be inscribed on his tombstone. His wish was not honored, but it was later inscribed on a memorial in his honor in his home town of Brunswick. (If anyone knows where I can find a photo of it, please link to it in the comments.) [Update:] a 17-pointed star was inscribed on a memorial erected in his honor in his home town of Brunswick. In the photo below you can (barely) see the star under Gauss’s right foot. Here is a closeup. Ludolph Van Ceulen—the first 35 digits of π Ludolph Van Ceulen spent most of his life computing the first 35 digits of pi. He used Archimedes’ technique and polygons of $2^{62}$ sides! His tombstone contained his upper and lower bounds for pi. The original tombstone disappeared some time around 1800; a replica is shown below. Bernoulli was so enamored with the logarithmic spiral that he wanted it engraved on his headstone. However, the engraver accidentally carved an Archimedean spiral. The tombstone of physicist Ludwig Boltzmann contains his entropy formula $S=k \log W$. Paul Dirac—the Dirac equation This burial plaque can be found in Westminster Abbey, not far from Isaac Newton’s resting place. It contains Dirac’s relativistic electron equation, $i\gamma\cdot\partial\psi=m\psi$. Ferdinand von Lindemann—circle, square, pi In 1882 Lindemann proved that pi is a transcendental number. 
This put to rest the more than 2000-year-old question of whether it is possible to "square the circle"—i.e., construct, using only a compass and straightedge, a square having the same area as a given circle. (He proved that it was impossible.) His grave has a circle superimposed on a square, surrounding the symbol pi. [Update: Here's a memorial for Lindemann that also has the circle/square/pi in it. It is in the city of his birth, Hanover. Thanks to Pat Ballew for the image!] Henry Perigal—proof of the Pythagorean theorem Henry Perigal was an amateur mathematician who discovered the "dissection proof" of the Pythagorean theorem. The proof can now be found carved into his headstone. Alfred Clebsch—his grave says "Mathematiker" on it Posted by: Dave Richeson | April 15, 2011 ## Happy birthday Uncle Leonhard, I hope you enjoy your new home Today, on Leonhard Euler's 304th birthday, we find that the Euler Archive has a new home! This labor of love, created and run by Dominic Klyve, Lee Stemkoski, and Erik Tou, houses thousands of pages of Euler's original works as well as a growing number of translations of Euler's works. The site had been located at Dartmouth College, where the trio attended graduate school. But it has now moved to the servers of the MAA: http://eulerarchive.maa.org/ Check it out, and if you are interested and able, translate one of Euler's articles (that's what I did…with help). (By the way, to celebrate, the MAA is selling their five books on Euler at the discounted price of $20.) Posted by: Dave Richeson | April 14, 2011 ## Math books for young children I have a child in first grade and another who will be in elementary school in a couple of years. So I'm on the lookout for good children's books about mathematics. Below is a collection of books that I've read or that have been recommended to me. (I got some of these suggestions from people on Twitter.)
I’d really appreciate it if you would add your own suggestions in the comments (if you want to give age-ranges, descriptions, or links, that would be great too). I’ll add more to the list as I find them. Again, I’d say that the primary focus would be books for kids ages 5-12. Thanks! Posted by: Dave Richeson | March 22, 2011 ## Albrecht Dürer’s ruler and compass constructions Albrecht Dürer (1471–1528) is a famous Renaissance artist. Mathematicians probably know him best for his work Melencolia I which contains a magic square, a mysterious polyhedron, a compass, etc. Today I was reading his book Underweysung der Messung mit dem Zirckel und Richtscheyt (The Painter’s Manual: A manual of measurement of lines, areas, and solids by means of compass and ruler). It was published in 1525 and was reprinted posthumously in 1538 with additional material. Our library has a nice English translation of the first edition by Walter L. Strauss (1977). The book has scans of the old German on the left and the English translations on the right. (You can see scans of the original 1538 book online—see pp. 110–119 for this material.) Dürer wrote this book as a technical manual for artists, craftsmen, etc. For example, it gave elementary instructions for how to draw regular polygons with a ruler and compass. In Dürer’s time the old works of the Greeks were just becoming available again. Dürer apparently had access to them. In fact, the book begins, “The most sagacious of men, Euclid, has assembled the foundation of geometry. Those who understand him well can dispense with what follows here.” This is an odd beginning because, although he gives some Euclidean constructions, many are new; moreover, he must have known that the vast majority of the readers knew no Euclid. Because this is a technical manual, Dürer’s emphasis is on easy-to-draw constructions that look good—in particular, some are just good looking approximations. 
Many of them were known to craftsmen of the day (techniques passed down through the generations) or appeared in print earlier, but some may have been discovered by Dürer. I was led to the book because I wanted to see his construction of the regular pentagon. He gives two constructions. One is a classical Greek construction, but it is the second that I wanted to see. This construction has two notable features. One is that it is drawn using a rusty compass—that is, the compass is set to one opening for the entire construction. This was probably a great bonus to the artisan; I imagine that not having to continually adjust the compass made the construction fast and accurate. The second interesting fact is that it is only approximately a regular pentagon—but it is a very good approximation. By my calculations, the error in the height of the pentagon is less than 1%. The construction is shown below. You begin with the line segment labeled ${ab}$ and you set the compass to this radius. I'll leave it as an exercise to the reader to reverse engineer the construction. Dürer never says that this is an approximation. I found what I was looking for. But then I found more. Dürer goes on to give ruler and compass constructions of regular polygons with sides numbering 3, 4, 5, 6, 7, 8, 9, 11, and 13. As you may know, some of these are impossible constructions (7, 9, 11, and 13). Hence Dürer's constructions must be approximations. The heptagon (7-gon) and the nonagon (9-gon) are excellent approximations. So I thought I'd share them with you. Dürer's heptagon is remarkably easy to construct. Begin with an equilateral triangle inscribed in a circle. Then half of the side of the triangle is nearly the same length as the side of the inscribed heptagon. So all we must do is bisect one side, and sweep an arc to obtain the first side of the heptagon. Then use this side to draw the rest. The construction of the nonagon is a little more involved. Draw a circle.
Then with the same opening of the compass draw three “fish-bladders” (as he calls them)—to do this you need the centers on the vertices of an inscribed equilateral triangle. Draw a radial segment inside one fish-bladder and divide it into thirds. Draw a perpendicular line at the 1/3 mark. It intersects the bladder in two points (${e}$ and ${f}$ in Dürer’s diagram). Draw a circle with the same center as the large circle, passing through ${e}$ and ${f}$. Then ${ef}$ is one side of an (approximate) nonagon inscribed in the smaller circle. As with the pentagon, Dürer does not mention that the heptagon and the nonagon are approximations. However, he admits that the constructions of the 11-gon and the 13-gon (which I will omit) are “mechanical [approximate] and not demonstrative.” As if that is not cool enough, Dürer tackles the famously impossible angle trisection and circle squaring problems. To see his (approximate) angle trisection solution we begin with an arc of a circle and the corresponding chord. (He actually trisects the arc, but that is equivalent to trisecting the central angle.) He begins by trisecting the chord. The points in his diagram are, in order left-to-right, ${a}$, ${c}$, ${d}$, and ${b}$. Draw perpendicular lines from ${c}$ and ${d}$ to the arc, then swing arcs from these points (with centers ${a}$ and ${b}$) down to the chord. These new points are ${j}$ and ${k}$. Trisect the segments ${cj}$ and ${kd}$. Using the points closest to ${j}$ and ${k}$, sweep arcs back up to the original arc of the circle to obtain ${l}$ and ${m}$. Then arcs ${al}$, ${~lm}$, and ${mb}$ are approximately equal (he does not admit to this being an approximation). Finally we turn to his circle squaring. It is very crude. He writes “The quadratura circui, which means squaring the circle so that both square and circle have the same surface area, has not been demonstrated by scholars. 
But it can be done approximately for minor applications or small areas in the following manner." In the first edition of the book he uses an approximation of ${\pi\approx 3\frac{1}{8}}$, and in the second he uses ${\pi\approx 3\frac{1}{7}}$. He writes simply, "Draw a square and divide its diagonal into ten parts and then draw a circle with a diameter of eight of these parts." What a great find and an enjoyable read! By the way, I encourage you to try performing these constructions. I did so using GeoGebra and was amazed by the resulting figures. Posted by: Dave Richeson | March 1, 2011 ## A picture of frustration: Sam Loyd's 15 puzzle Mathematics, whether it be calculus homework or cutting-edge research, can be very challenging. Haven't we all faced a problem that we struggle with for hours or days? The answer, we know, or we hope, is within our grasp—but we just can't reach it. In moments like that I always think of this picture from the famous puzzle-master Sam Loyd (who I wrote about once before). It can be found in Sam Loyd's 1914 Cyclopedia of Puzzles (scans of the entire book are available online). Isn't it great? The picture shows a farmer neglecting his fields while trying in vain to solve the famous sliding block puzzle (also known as the 15-puzzle). In this case, the puzzle is set to Loyd's starting configuration—with the 14 and 15 switched. Loyd offered $1000 for the first correct solution of this puzzle. He wrote:
Unfortunately mathematical research is often too much like this picture—Loyd never paid out the $1000 because his 15-puzzle is impossible to solve. The proof of impossibility is a nice application of group theory. Johnson and Story gave the first proof in 1879 (you can find their article here, although it may require a password; see this article for a more modern treatment). Incidentally, Sam Loyd insisted until his death in 1911 that he invented the puzzle. However, in a recent book (that has a whopping 3 subtitles!), The 15 Puzzle: How It Drove the World Crazy; The Puzzle That Started the Craze of 1880; How America's Greatest Puzzle Designer, Sam Loyd, Fooled Everyone for 115 Years, Jerry Slocum and Dic Sonneveld show that this was another instance of Loyd's trickery and deception. They investigated the origin of the puzzle and discovered that Loyd was not the inventor. The puzzle was invented around 1874 by Noyes Palmer Chapman, a postmaster from Canastota, New York. Posted by: Dave Richeson | February 19, 2011 ## Millay's Euclid looks on Beauty bare I had forgotten about this poem by Edna St. Vincent Millay until I stumbled upon it again today. I thought you all would like it. Euclid alone has looked on Beauty bare. Let all who prate of Beauty hold their peace, And lay them prone upon the earth and cease To ponder on themselves, the while they stare At nothing, intricately drawn nowhere In shapes of shifting lineage; let geese Gabble and hiss, but heroes seek release From dusty bondage into luminous air. O blinding hour, O holy, terrible day, When first the shaft into his vision shone Of light anatomized! Euclid alone Has looked on Beauty bare. Fortunate they Who, though once only and then but far away, Have heard her massive sandal set on stone. —Edna St.
Vincent Millay (1922) Posted by: Dave Richeson | February 18, 2011 ## Lincoln and squaring the circle I’d heard a long time ago that Abraham Lincoln was a largely self-taught man and that he read Euclid’s Elements on his own. Right now I’m reading Doris Kearns Goodwin’s Team of Rivals: The Political Genius of Abraham Lincoln, and from it I learned that not only did he read Euclid, he spent some time trying to square the circle. Today we think of circle squarers as mathematical cranks. But remember that this was in the 1850′s, more than two decades before Lindemann’s proof that $\pi$ is transcendental—the result which proved conclusively that it is impossible to square the circle. Here’s the relevant passage: During his nights and weekends on the circuit, in the absence of domestic interruptions, [Lincoln] taught himself geometry, carefully working out propositions and theorems until he could proudly claim that he had “nearly mastered the Six-books of Euclid.” His first law partner, John Stuart, recalled that “he read hard works—was philosophical—logical—mathematical—never read generally.” Herndon describes finding him one day “so deeply absorbed in study he scarcely looked up when I entered.” Surrounded by “a quantity of blank paper, large heavy sheets, a compass, a rule, numerous pencils, several bottles of ink of various colors, and a profusion of stationery,” Lincoln was apparently “struggling with a calculation of some magnitude, for scattered about were sheet after sheet of paper covered with an unusual array of figures.” When Herndon inquired what he was doing, he announced “that he was trying to solve the difficult problem of squaring the circle.” To this insoluble task posed by the ancients over four thousand years earlier, he devoted “the better part of the succeeding two days… almost to the point of exhaustion.” Pretty cool! Aside: There are two mathematical oddities here. 
First of all, it is strange that they mention the six books of Euclid, rather than the thirteen books. [Update: Now that I think about it, the first six books are the ones covering plane geometry. Book 7 is where the number theory begins. Then the end of Elements covers solid geometry.] Second, I'm curious to know where the author got the figure "over four thousand years ago" for the origin of the circle squaring problem. If the origin of the problem is marked by the first approximation of $\pi$, then that's not a terrible exaggeration (as far as I am aware, the earliest known approximation is found in the Egyptian Rhind papyrus, which dates back to roughly 1650 BCE). But if we mean the classical problem (Is it possible to create a square with the same area as a given circle using only a straightedge and compass?), then it is a much younger problem than she asserts.

Second aside: Lincoln was not the only mathematically-inclined president. For example, James Garfield discovered a new proof of the Pythagorean theorem.
http://occasionallycogent.com/
Text

Computers are Ridiculous

Recently, I came across a neat script from Dr. Drang for making quick graphs from the command-line. I thought this would be a nice thing to have around, since it would save me having to look up all the options to make gnuplot look decent or remembering how exactly matplotlib works. I downloaded the script, tried to run it, and the comedy of errors began:

ImportError: dlopen(.../matplotlib/_png.so, 2): Library not loaded: /usr/local/lib/libpng15.15.dylib
Referenced from: .../matplotlib/_png.so
Reason: image not found

How vexing, apparently matplotlib wants a different version of libpng than I have installed. Luckily, I use homebrew to install this sort of thing, so with only a little bit of searching, I was able to find this Stack Overflow answer that showed exactly how to get the desired version of the library I wanted. Being the hubristic fool that I am though, I assumed that since I wasn't seeing the error that the questioner there was with regards to freetype, I would be fine to only do the first part of the fix and only get the old version of libpng and not bother with getting the specific version of freetype.

Of course, after performing the lengthy rebuild of matplotlib, now I was getting the freetype error, so I had to do the second step anyway. I avoided having to go through the lengthy re-build again though…because now pip was caching the compilation. While this did make it go very quickly, it also meant that it wasn't bothering to try to link things again, so it was still trying to use the wrong version of the library and dying. After cursing, repeatedly re-building, and trying in vain to discover if pip had some way to tell it to clean out its cache, I eventually was able to discover that the temporary build files were being stored in the intuitive location of /var/folders/4q/v5431hj953g940y4zy9jvg5c0000gn/T/pip-build-james.
Muttering only the mildest of invectives, I deleted the directory, rebuilt matplotlib, and no longer saw the freetype error! Hooray! Instead, I got an error that my version of python wasn't built as a framework. This, of course, was because I've been using pythonbrew (which is now deprecated) to provide python, so that I don't need to worry about my particular version of python being overwritten by OS upgrades. There is a flag one can give pythonbrew to make it build as a framework to avoid exactly this issue, but I had neglected to provide it when I first set it up, many moons ago. Since, as best I can determine, there's no way to make pythonbrew just build the framework, I had to uninstall my current version of python and re-install it, this time with the --framework flag. This took a while, but it succeeded, and I was finally able to run the script!

Except now vim, my text editor, was broken. At this point I was becoming a little more annoyed. Making graphs once in a while would be nice, but I use vim not only for all of my work, but for composing emails1 and basically anything that entails composing more than a single line of text. Now though, any time I tried to start vim, I was rewarded with nothing but the terse error ImportError: No module named site. My first thought was that since I had re-installed python, all the packages had been nuked, so I just needed to re-install said packages. However, there exists no package named site. As best I can determine site is some sort of magical meta-package that should just always exist. After some fruitless Googling (or rather, DuckDuckGo-ing), I guessed that it was because vim was linked against some particular version of python and that having the pythonbrewed version installed as a framework was causing it some distress.
Suppressing the terrifying flashbacks to the last time I tried to do anything related to vim linking with python2, I decided to try just re-installing vim from homebrew, hoping it would pick up the new environment. Girding my loins, I unlinked my current version of vim, ran brew install vim to get the new version…and the build exploded. Suppressing panic, trying not to imagine a future where I wrote code using a quill and my own blood, I tried again. And, for reasons I completely fail to understand, it worked. Which frankly, makes me even more uncomfortable. When something unexpectedly fails, you can debug and figure out what’s going on. When something unexpectedly succeeds, all you can do is hope that you never lose the favour of the good faeries in the computer. So, in summary: Computers are ridiculous, making the smallest change to anything will break everything, and success is more troubling than failure. But at least I can make these sweet graphs! 1. In fact, while in the throes of dealing with this, I got a time-sensitive email from my step-mother that I couldn’t respond to, since vim would crash every time mutt tried to compose a response, forcing me to resort to sending a text message, like an animal. 2. A long and sordid tale, entailing building vim from source; many, many segfaults; weeping; and no real resolution Text A Little Update I’ve been fairly quiet online for the last year or so, as I’ve been fairly heads-down working on quite a few different things. Penyo Pal went through a few iterations and my new startup has been doing a number of different things. Most of what I’ve been doing is closed-source for now, but we’ve been wanting to start open-sourcing our stuff. To help start that process, I decided to try to compile a list of the stuff I’ve built over the last little while that we’ve released. 
From Penyo Pal, we forked and modified two different game frameworks: Moai for the original version of the game & currently for Private Eye and Dance Party, then Ejecta for the current version of the main app.

• Ejecta (fork): Now lagging behind the main repo by quite a bit, but we've added a number of helpers that we needed for our purposes.
• moai has two different branches we used on the previous versions of Penyo Pal, both of which are very far behind

From Lean Pixel, where most of our work has been Clojure-based.

• inliner: A clojure library for inlining CSS in HTML emails.
• lein-lesscss (fork): A fork of the leiningen lesscss plugin to update the version of the LESS compiler & to let it run continuously in "auto" mode
• www_fdw (fork): A fork of the Postgres www foreign data wrapper to fix some bugs we found using it
• metaslurp: A clojure library to get information about a given URL, primarily using Open Graph, but falling back to heuristics if unavailable

Hopefully we'll be able to open-source some of the big, cool things we've been working on soon…

Link

What I've been working on for the last couple months…»

Since before school ended, I've been working at this awesome startup. This is pretty much my dream job, getting to write code to do cool stuff all day. Maybe less money than I'd be getting at Google or IBM, but there is something extraordinarily rewarding about the weight of responsibility being one of a tiny team gives you & knowing that one's actions will have a big impact. (And as one little shameless plug, if you're interested in learning Mandarin, you should definitely take a look at PenyoPal!)

Link

You've been suddenly sucked into an RPG. What are your stats?»

Sean: Lv.61 Dancer. Special trait: rides a unicorn.
Colonel Cheru: Lv.76 Nerd. Special trait: can breakdance.
Trevor: Lv.24 Clone. Special trait: can breakdance. We shall be a glorious dancing team.
Katlynn: Lv.87 Sniper. Special trait: can come back to life. Not bad.
Dunno how I’d get to level 87 though my aim’s probably horrible |D Krystina: Lv.64 Gambler. Special trait: never gets sick. This is actually super accurate in that I really…never get sick. Doctors told me I have a mutant gene that fortifies my immune system. Kayzig: Lv.35 Salaryman. Special trait: can sleep anywhere. My face when I’ve been having super trouble sleeping lately to the point where I’ve considered looking for ways to medicate it and this is super appealing. cries docvalentine: Lv.96 Knight. Special trait: radioactive. heysawbones: Lv.78 Salaryman. Special trait: has psychokinetic powers. Oh my god, this is everything I’ve ever wanted. mayeko: Lv.23 Gypsy. Special trait: can control water. I’m a nomad that controls water? So basically southern water tribe. Makes sense? James: Lv.83 Nerd. Special trait: has ESP. Well, that’s eerily accurate (Source: kamalaophelia) Photo He’s fabulous. It helps that he’s nice to look at. what. Livin’ the dream Video So I decided to make a lil’ promo video for the weightlifting team I want to start in the fall. Enjoy :) So pumped! Photo Photo SKULL-SAN IS SO FUCKIN’ DREAMY (Source: fartchan) vidir allows editing of the contents of a directory in a text editor. A slightly eccentric way to remove or rename files, admittedly, but I quite like it. This is super-useful. Back when I used Emacs, I’d use dired-mode’s ability to edit directories when I need to do some fancy renaming stuff, but now that I’m using Vim, this is a life-saver. Photo HELLO EVERYBODY
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_Statistics_Using_Technology_(Kozak)/05%3A_Discrete_Probability_Distributions
# 5: Discrete Probability Distributions

• 5.1: Basics of Probability Distributions
There are different types of quantitative variables, called discrete or continuous. What is the difference between discrete and continuous data? Discrete data can only take on particular values in a range. Continuous data can take on any value in a range. Discrete data usually arises from counting while continuous data usually arises from measuring.

• 5.2: Binomial Probability Distribution
The focus of the section was on discrete probability distributions (pdf). To find the pdf for a situation, you usually needed to actually conduct the experiment and collect data. Then you can calculate the experimental probabilities. Normally you cannot calculate the theoretical probabilities. However, there are certain types of experiment that allow you to calculate the theoretical probability. One of those types is called a Binomial Experiment.
• 5.3: Mean and Standard Deviation of Binomial Distribution If you list all possible values of x in a Binomial distribution, you get the Binomial Probability Distribution (pdf). You can draw a histogram of the pdf and find the mean, variance, and standard deviation of it. This page titled 5: Discrete Probability Distributions is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Kathryn Kozak via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
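The binomial results summarized in sections 5.2 and 5.3 can be illustrated with a short computation. This sketch is not part of the original text; it is written in Python and uses the standard formulas P(X = k) = C(n, k) p^k (1 − p)^(n−k), mean = np, and standard deviation = √(np(1 − p)).

```python
from math import comb, sqrt

def binomial_pmf(k, n, p):
    """P(X = k) for a Binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.5

# A valid pdf: the probabilities over all possible values of x sum to 1.
total = sum(binomial_pmf(k, n, p) for k in range(n + 1))

# Mean and standard deviation of a binomial distribution.
mean = n * p                  # mu = n*p
sd = sqrt(n * p * (1 - p))    # sigma = sqrt(n*p*(1-p))

print(round(total, 10))  # 1.0
print(mean)              # 5.0
print(round(sd, 4))      # 1.5811
```

For n = 10 and p = 0.5 the distribution is symmetric about its mean of 5, which is why the histogram of this pdf is bell-shaped.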
https://cran.rediris.es/web/packages/bbsBayes/readme/README.html
# bbsBayes

This README file provides an overview of the functionality that can be accomplished with 'bbsBayes'. It is intended to provide enough information for users to perform, at the very least, replications of status and trend estimates from the Canadian Wildlife Service and/or United States Geological Survey. However, it provides more in-depth and advanced examples for subsetting data, custom regional summaries, and more.

Additional resources: Introductory bbsBayes Workshop Journal Article with worked example (preprint)

## Overview

bbsBayes is a package to perform hierarchical Bayesian analysis of North American Breeding Bird Survey (BBS) data. 'bbsBayes' will run a full model analysis for one or more species that you choose, or you can take more control and specify how the data should be stratified, prepared for JAGS, or modelled.

## Installation

Option 1: Stable release from CRAN (currently v2.3.8.2020)

# To install v2.3.8.2020 from CRAN:
install.packages("bbsBayes")

Option 2: Less-stable development version

# To install the development version from GitHub:
install.packages("devtools")
library(devtools)
devtools::install_github("BrandonEdwards/bbsBayes")

## Basic Status and Trend Analyses

bbsBayes provides functions for every stage of Breeding Bird Survey data analysis.

### Data Retrieval

You can download BBS data by running fetch_bbs_data. This will save it to a package-specific directory on your computer. You must agree to the terms and conditions of the data usage before downloading. You only need to run this function once for each annual update of the BBS database.

fetch_bbs_data()

### Data Preparation

#### Stratification

Stratification plays an important role in trend analysis. Use the stratify() function for this job.
Set the argument by to stratify by the following options:

• bbs_cws – Political region X Bird Conservation region intersection (Canadian Wildlife Service [CWS] method)
• bbs_usgs – Political region X Bird Conservation region intersection (United States Geological Survey [USGS] method)
• bcr – Bird Conservation Region only
• state – Political Region only
• latlong – Degree blocks (1 degree of latitude X 1 degree of longitude)

stratified_data <- stratify(by = "bbs_cws")

#### JAGS Data

JAGS models require the data to be sent as a list depending on how the model is set up. prepare_jags_data subsets the stratified data based on species and wrangles relevant data to use for JAGS models.

jags_data <- prepare_jags_data(stratified_data, species_to_run = "Barn Swallow", min_max_route_years = 5, model = "gamye", heavy_tailed = T)

Note: This can take a very long time to run

### MCMC

Once the data has been prepared for JAGS, the model can be run. The following will run MCMC with default number of iterations. Note that this step usually takes a long time (e.g., 6-12 hours, or even days depending on the species, model). If multiple cores are available, the processing time is reduced with the argument parallel = TRUE.

mod <- run_model(jags_data = jags_data)

Alternatively, you can set how many iterations, burn-in steps, or adapt steps to use, and whether to run chains in parallel

jags_mod <- run_model(jags_data = jags_data, n_saved_steps = 1000, n_burnin = 10000, n_chains = 3, n_thin = 10, parallel = FALSE, parameters_to_save = c("n", "n3", "nu", "B.X", "beta.X", "strata", "sdbeta", "sdX"), modules = NULL)

The run_model function generates a large list (object jagsUI) that includes the posterior draws, convergence information, data, etc.

### Convergence

The run_model() function will send a warning if Gelman-Rubin Rhat cross-chain convergence criterion is > 1.1 for any of the monitored parameters.
Re-running the model with a longer burn-in and/or more posterior iterations or greater thinning rates may improve convergence. The seriousness of these convergence failures is something the user must interpret for themselves. In some cases some parameters of the model may not be separately estimable, but if there is no direct inference drawn from those separate parameters, their convergence may not be necessary. If all or the vast majority of the n parameters have converged (i.e., you're receiving this warning message only for other monitored parameters), then inference on population trajectories and trends from the model is reliable.

jags_mod$n.eff #shows the effective sample size for each monitored parameter
jags_mod$Rhat # shows the Rhat values for each monitored parameter

If important monitored parameters have not converged, we recommend inspecting the model diagnostics with the package ggmcmc.

install.packages("ggmcmc")
S <- ggmcmc::ggs(jags_mod$samples,family = "B.X") #samples object is an mcmc.list object
ggmcmc::ggmcmc(S,family = "B.X") ## this will output a pdf with a series of plots useful for assessing convergence. Be warned this function will be overwhelmed if trying to handle all of the n values from a BBS analysis of a broad-ranged species

Alternatively, the shinystan package has some wonderful interactive tools for better understanding convergence issues with MCMC output.

install.packages("shinystan")
my_sso <- shinystan::launch_shinystan(shinystan::as.shinystan(jags_mod$samples, model_name = "My_tricky_model"))

bbsBayes also includes a function to help re-start an MCMC chain, so that you avoid having to wait for an additional burn-in period.

### if jags_mod has failed to converge...
new_initials <- get_final_values(jags_mod)
jags_mod2 <- run_model(jags_data = jags_data, n_saved_steps = 1000, n_burnin = 0, n_chains = 3, n_thin = 10, parallel = FALSE, inits = new_initials, parameters_to_save = c("n", "n3", "nu", "B.X", "beta.X", "strata", "sdbeta", "sdX"), modules = NULL)

## Model Predictions

There are a number of tools available to summarize and visualize the posterior predictions from the model.

### Annual Indices of Abundance and Population Trajectories

The main monitored parameters are the annual indices of relative abundance within a stratum (i.e., parameters "n[strata,year]"). The time-series of these annual indices form the estimated population trajectories.

indices <- generate_indices(jags_mod = jags_mod, jags_data = jags_data)

By default, this function generates estimates for the continent (i.e., survey-wide) and for the individual strata. However, the user can also select summaries for composite regions (regions made up of collections of strata), such as countries, provinces/states, Bird Conservation Regions, etc. For display, the posterior medians are used for annual indices (instead of the posterior means) due to the asymmetrical distributions caused by the log-linear retransformation.

indices <- generate_indices(jags_mod = jags_mod, jags_data = jags_data, regions = c("continental", "national", "prov_state", "stratum")) #also "bcr", "bcr_by_country"

Population trends can be calculated from the series of annual indices of abundance. The trends are expressed as geometric mean rates of change (%/year) between two points in time.

$$Trend = 100 \times \left( \left( \frac{n[Maxyear]}{n[Minyear]} \right)^{1/(Maxyear-Minyear)} - 1 \right)$$

trends <- generate_trends(indices = indices, Min_year = 1970, Max_year = 2018)

The generate_trends function returns a dataframe with 1 row for each unit of the region-types requested in the generate_indices function (i.e., 1 for each stratum, 1 continental, etc.).
The dataframe has at least 27 columns that report useful information related to each trend, including the start and end year of the trend, lists of included strata, total number of routes, number of strata, mean observed counts, and estimates of the % change in the population between the start and end years. The generate_trends function includes some other arguments that allow the user to adjust the quantiles used to summarize uncertainty (e.g., interquartile range of the trend estimates, or the 67% CIs), as well as include additional calculations, such as the probability a population has declined (or increased) by > X%.

trends <- generate_trends(indices = indices, Min_year = 1970, Max_year = 2018, prob_decrease = c(0,25,30,50), prob_increase = c(0,33,100))
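To make the trend definition concrete, here is a minimal numeric sketch of a geometric mean rate of change between the annual indices for two years, expressed as %/year. It is written in Python purely to show the arithmetic; it is not part of the bbsBayes API, and `percent_trend` is a hypothetical helper name.

```python
def percent_trend(n_min_year, n_max_year, min_year, max_year):
    """Geometric mean rate of population change, in %/year,
    between the annual indices for the first and last years."""
    ratio = n_max_year / n_min_year
    rate = ratio ** (1 / (max_year - min_year))  # per-year multiplicative rate
    return 100 * (rate - 1)

# A population whose annual index doubles over 10 years
# grows at about 7.18 %/year:
print(round(percent_trend(100, 200, 2008, 2018), 2))  # 7.18
```

Note that a constant %/year trend compounds multiplicatively, which is why the same endpoint indices always give the same trend regardless of the path the trajectory takes between them.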
gf <- geofacet_plot(indices_list = indices, select = T, stratify_by = "bbs_cws", multiple = T, trends = trends, slope = F, species = "Barn Swallow") #png("BARS_geofacet.png",width = 1500, height = 750,res = 150) print(gf) #dev.off() ## EXAMPLE - Replicating the CWS status and trend estimates (2018 version onwards) The CWS analysis, as of the 2018 BBS data-version, uses the GAMYE model. It also monitors two estimates of the population trajectory: * one for visualizing the trajectory that includes the annual fluctuations estimated by the year-effects “n” * and another for calculation trends using a trajectory that removes the annual fluctuations around the smooth “n3”. The full script to run the CWS analysis for the 2018 BBS data version is accessible here: https://github.com/AdamCSmithCWS/BBS_Summaries species.eng = "Pacific Wren" stratified_data <- stratify(by = "bbs_cws") #same as USGS but with BCR7 as one stratum and PEI and Nova Scotia combined into one stratum jags_data <- prepare_jags_data(strat_data = stratified_data, species_to_run = species.eng, min_max_route_years = 5, model = "gamye", heavy_tailed = T) #heavy-tailed version of gamye model jags_mod <- run_model(jags_data = jags_data, n_saved_steps = 2000, n_burnin = 10000, n_chains = 3, n_thin = 20, parallel = F, parameters_to_save = c("n","n3","nu","B.X","beta.X","strata","sdbeta","sdX"), modules = NULL) # n and n3 allow for index and trend calculations, other parameters monitored to help assess convergence and for model testing ## EXAMPLE - Replicating (approximately) the earlier USGS status and trend estimates (2011 - 2017 data versions) The USGS analysis, from 2011 through 2017, uses the SLOPE model. Future analyses from the USGS will likely use the first difference model (see, Link et al. 2017 https://doi.org/10.1650/CONDOR-17-1.1) NOTE: the USGS analysis is not run using the bbsBayes package, and so this analysis may not exactly replicate the published version. 
However, any variations should be very minor. species.eng = "Pacific Wren" stratified_data <- stratify(by = "bbs_usgs") #BCR by province/state/territory intersections jags_data <- prepare_jags_data(strat_data = stratified_data, species_to_run = species.eng, min_max_route_years = 1, model = "slope", heavy_tailed = FALSE) #normal-tailed version of slope model jags_mod <- run_model(jags_data = jags_data, n_saved_steps = 2000, n_burnin = 10000, n_chains = 3, n_thin = 20, parallel = FALSE, track_n = FALSE, parameters_to_save = c("n2"), #more on this alternative annual index below modules = NULL) ## Alternative Models The package has (currently) four status and trend models that differ somewhat in the way they model the time-series of observations. The four model options are slope, gam, gamye, and firstdiff. ### slope The slope option estimates the time series as a log-linear regression with random year-effect terms that allow the trajectory to depart from the smooth regression line. It is the model used by the USGS and CWS to estimate bbs trends since 2011. The basic model was first described in 2002 (Link and Sauer 2002; https://doi.org/10.1890/0012-9658(2002)083[2832:AHAOPC]2.0.CO;2) and its application to the annual status and trend estimates is documented in Sauer and Link (2011; https://doi.org/10.1525/auk.2010.09220) and Smith et al. (2014; https://doi.org/10.22621/cfn.v128i2.1565). 
#stratified_data <- stratify(by = "bbs_usgs")
#jags_data_slope <- prepare_jags_data(stratified_data,
# species_to_run = "American Kestrel",
# min_max_route_years = 3,
# model = "slope")
#jags_mod_full_slope <- run_model(jags_data = jags_data)
slope_ind <- generate_indices(jags_mod = jags_mod_full_slope, jags_data = jags_data_slope, regions = c("continental"))
slope_plot = plot_indices(indices = slope_ind, species = "American Kestrel SLOPE")
#png("AMKE_Slope.png", width = 1500,height = 900,res = 150)
print(slope_plot)
#dev.off()

### gam

The gam option models the time series as a semiparametric smooth using a Generalized Additive Model (GAM) structure. See https://github.com/AdamCSmithCWS/Smith_Edwards_GAM_BBS for more information (full publication coming soon)

#stratified_data <- stratify(by = "bbs_usgs")
#jags_data_gam <- prepare_jags_data(stratified_data,
# species_to_run = "American Kestrel",
# min_max_route_years = 3,
# model = "gam")
#jags_mod_full_gam <- run_model(jags_data = jags_data)
gam_ind <- generate_indices(jags_mod = jags_mod_full_gam, jags_data = jags_data_gam, regions = c("continental"))
gam_plot = plot_indices(indices = gam_ind, species = "American Kestrel GAM")
#png("AMKE_gam.png", width = 1500,height = 900,res = 150)
print(gam_plot)
#dev.off()

### gamye

The gamye option includes the semiparametric smooth used in the gam option, but also includes random year-effect terms that track annual fluctuations around the smooth. This is the model that the Canadian Wildlife Service is now using for the annual status and trend estimates.
#stratified_data <- stratify(by = "bbs_usgs")
#jags_data_gamye <- prepare_jags_data(stratified_data,
# species_to_run = "American Kestrel",
# min_max_route_years = 3,
# model = "gamye")
#jags_mod_full_gamye <- run_model(jags_data = jags_data)
gamye_ind <- generate_indices(jags_mod = jags_mod_full_gamye, jags_data = jags_data_gamye, regions = c("continental"))
gamye_plot = plot_indices(indices = gamye_ind, species = "American Kestrel GAMYE")
#png("AMKE_gamye.png", width = 1500,height = 900,res = 150)
print(gamye_plot)
#dev.off()

### firstdiff

The firstdiff option models the time-series as a random-walk from the first year, so that the first-differences of the sequence of year-effects are random effects with mean = 0 and an estimated variance. This model has been described in Link et al. 2017 https://doi.org/10.1650/CONDOR-17-1.1

#stratified_data <- stratify(by = "bbs_usgs")
#jags_data_firstdiff <- prepare_jags_data(stratified_data,
# species_to_run = "American Kestrel",
# min_max_route_years = 3,
# model = "firstdiff")
#jags_mod_full_firstdiff <- run_model(jags_data = jags_data)
firstdiff_ind <- generate_indices(jags_mod = jags_mod_full_firstdiff, jags_data = jags_data_firstdiff, regions = c("continental"))
firstdiff_plot = plot_indices(indices = firstdiff_ind, species = "American Kestrel FIRSTDIFF")
#png("AMKE_firstdiff.png", width = 1500,height = 900,res = 150)
print(firstdiff_plot)
#dev.off()

## Alternate extra-Poisson error distributions

For all of the models, the BBS counts on a given route and year are modeled as Poisson variables with over-dispersion. The over-dispersion approach used here is to add a count-level random effect that adds extra variance to the unit variance:mean ratio of the Poisson. In the prepare_jags_data function, the user can choose between two distributions to model the extra-Poisson variance:

• the default normal distribution (heavy_tailed = FALSE)
• an alternative heavy-tailed t-distribution.
(heavy_tailed = TRUE)

The heavy-tailed version is well supported for many species, particularly species that are sometimes observed in large groups. Note: the heavy-tailed version can require significantly more time to converge (roughly a 2-5 fold increase in processing time).

```r
#stratified_data <- stratify(by = "bbs_usgs")
#jags_data_firstdiff <- prepare_jags_data(stratified_data,
#                                         species_to_run = "American Kestrel",
#                                         min_max_route_years = 3,
#                                         model = "firstdiff",
#                                         heavy_tailed = TRUE)
#jags_mod_full_firstdiff <- run_model(jags_data = jags_data_firstdiff)
```

In all the models, the default measure of the annual index of abundance (the yearly component of the population trajectory) is the derived parameter "n". The run_model function monitors n by default, because it is these parameters that form the basis of the estimated population trajectories and trends.

### Alternate retransformations

There are two ways of calculating these annual indices for each model. The two approaches differ in the way they calculate the retransformation from the log-scale model parameters to the count-scale predictions. The user can choose using the following arguments in run_model() and generate_indices().

- The default estimates the mean of the expected counts from the existing combinations of observers and routes in a given stratum and year. This approach retransforms an annual prediction for every observer-route combination in the stratum and then averages across those predictions.

```r
mod <- run_model(... , parameters_to_save = "n", ... )
indices <- generate_indices(... , alternate_n = "n", ... )
```

- The alternative, parameters_to_save = c("n2"), track_n = FALSE, is actually the standard approach used in the USGS status and trend estimates. It estimates the expected count from a new observer-route combination, assuming the distribution of observer-route effects is approximately normal.
This approach uses a log-normal retransformation factor that adds half of the estimated variance of observer-route effects to the log-scale prediction for each year and stratum, then retransforms that log-scale prediction to the count scale. This is the approach described in Sauer and Link (2011; https://doi.org/10.1525/auk.2010.09220).

```r
mod <- run_model(... , parameters_to_save = "n2", ... )
indices <- generate_indices(... , alternate_n = "n2", ... )
```

The default approach parameters_to_save = c("n") slightly underestimates the uncertainty of the annual indices (slightly narrower CI width). However, we have chosen this approach as the default because:

- it much more accurately represents the observed mean counts, and so allows for an intuitive interpretation of the annual indices;
- it more accurately represents the relative contribution of each stratum to the combined (e.g., continental or national) population trajectory and trends. The alternative n2 approach tends to overestimate the observed mean counts, and that bias varies among strata, which affects each stratum's contribution to the combined regional estimates;
- the small underestimate in the uncertainty of the annual indices does not affect the uncertainty of the trend estimates.

### Decomposing the population trajectories for two of the models

For two of the main model types, "slope" and "gamye", users can choose between two ways to calculate trajectories and population trends. With these two model types, the population trajectory is composed of two largely independent components: a long-term smooth and the random annual fluctuations around that smooth. Because the two components are largely independent, the population trajectory can be decomposed. The default approach is to include the annual fluctuations around the linear (slope) or GAM-smooth (gamye) components of the trajectories.
These trend estimates are more comprehensive in that they include the full estimated trajectory, but they will vary more between subsequent years (e.g., more variability between a 1970-2017 trend and a 1970-2018 trend), because they include the effects of the annual fluctuations.

```r
mod <- run_model(... , parameters_to_save = "n", ... )
indices <- generate_indices(... , alternate_n = "n", ... )
```

An alternative approach is to decompose the full trajectory and to exclude the annual fluctuations around the linear (slope) or smooth (gamye) components. In this case, the predicted trends will be much more stable between subsequent years. For the CWS status and trend analyses, the visualized population trajectories are calculated using the full trajectory, and the trend estimates are derived from the decomposed trajectory using only the smooth component.

```r
mod <- run_model(... , parameters_to_save = c("n","n3"), ... )
indices_visualize <- generate_indices(... , alternate_n = "n", ... )
indices_trend_calculation <- generate_indices(... , alternate_n = "n3", ... )
```

For example, the figure below (produced using a modified version of the standard plotting functions) shows the two kinds of trajectories for Pacific Wren from the 2018 CWS analysis. The light-blue trajectory is the visualized trajectory, including the yearly fluctuations. The orange trajectory is the one used for trend calculations, which includes only the GAM-smooth component. For the kinds of broad-scale status assessments that form the primary use of the published trend estimates, this decomposition is a particularly useful feature of these two models.

The figure below provides another example of the benefits of removing the year-effect annual fluctuations when calculating trends. Each point on the graph represents the 10-year trend estimate for Wood Thrush in Canada, ending in a given year (e.g., the points at 2015 represent the species' national population trend from 2005-2015).
The red and green points are the default trend estimates derived from the full population trajectories for the gamye and slope models. The blue points represent the trends calculated using the decomposed trajectory of the gamye model, including only the smooth component. When the annual fluctuations are included (SLOPE and GAMYE including Year Effects), the population trends surpass the IUCN trend criterion in some years (e.g., 2011), suggesting that if assessed in those years the species would be listed as Threatened (trend in the orange region). However, the more stable trend estimate from the decomposed trajectory (GAMYE - Smooth only, in blue) shows that the species is probably best thought of as in decline, but not surpassing the Threatened criterion.

## Alternate Measures of Trend and Population Change

The generate_trends() function produces much more than just the trend estimates. The default trend calculation is an interval-specific estimate of the geometric mean annual change in the population:

$$Trend = \left(\frac{n[Maxyear]}{n[Minyear]}\right)^{1/(Maxyear-Minyear)}$$

It relies on a comparison of the annual indices in the first and last years of the trend period to quantify the mean rate of population change. However, it ignores the pattern of change between the two end-points. The user can choose an alternative estimate of change that is calculated by fitting a log-linear slope to the series of all annual indices between the two end-points (e.g., all 11 years in a 10-year trend from 2008-2018). The slope of this line can be expressed as an average annual percent change across the time period of interest. If working with estimates derived from a model with strong annual fluctuations and for which no decomposition is possible (e.g., the "firstdiff" model), this slope-based trend may be a more comprehensive measure of the average population change, one that is less dependent on the particular end-point years.
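For reference, the two measures can be written out explicitly. The notation here is ours, expressing both as percent change per year (the scale on which the package reports trends):

$$\%Trend_{endpoint} = 100\left[\left(\frac{n[Maxyear]}{n[Minyear]}\right)^{1/(Maxyear-Minyear)} - 1\right]$$

For the slope-based alternative, a log-linear regression $$\log(n_t) = \alpha + \beta t + \epsilon_t$$ is fit through all annual indices between the two end-points, and the fitted slope converts to an average annual percent change as

$$\%Trend_{slope} = 100\left(e^{\beta} - 1\right)$$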
These slope trends can be added to the trend output table by setting the slope = TRUE argument in generate_trends(). The standard trends are still calculated, but additional columns are added that include the alternate estimates. NOTE: the generate_map() function can map slope trends as well, with the same slope = TRUE argument.

```r
#jags_data_firstdiff <- prepare_jags_data(stratified_data,
#                                         species_to_run = "American Kestrel",
#                                         model = "firstdiff")
#jags_mod_full_firstdiff <- run_model(jags_data = jags_data_firstdiff)
#firstdiff_ind <- generate_indices(jags_mod = jags_mod_full_firstdiff,
#                                  jags_data = jags_data_firstdiff,
#                                  regions = c("continental","stratum"))

fd_slope_trends_08_18 <- generate_trends(indices = firstdiff_ind,
                                         Min_year = 2008,
                                         Max_year = 2018,
                                         slope = TRUE)
generate_map(fd_slope_trends_08_18,
             slope = TRUE,
             stratify_by = "bbs_usgs")
```

### Percent Change and probability of change

The generate_trends() function also produces estimates of the overall percent change in the population between the first and last years of the trend period. This quantity is often easier to interpret than an average annual rate of change. These percent-change estimates have associated uncertainty bounds, and so can be helpful for deriving statements such as "between 2008 and 2018, the population has declined by 20 percent, but that estimate is relatively uncertain and the true decline may be as little as 2 percent or as much as 50 percent."

In addition, the function can optionally calculate the posterior conditional probability that a population has changed by at least a certain amount, using the prob_decrease and prob_increase arguments.
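Concretely, these probabilities are posterior tail probabilities of the overall percent change. In our notation (a standard Monte Carlo sketch, not necessarily the package's exact internals), writing $$\Delta^{(s)} = 100\left(n^{(s)}[Maxyear]/n^{(s)}[Minyear] - 1\right)$$ for the percent change computed from posterior sample $$s$$, the probability of an increase of at least $$x$$ percent is estimated by the fraction of the $$S$$ posterior samples exceeding that threshold:

$$\Pr(\Delta > x) \approx \frac{1}{S}\sum_{s=1}^{S} \mathbf{1}\left[\Delta^{(s)} > x\right]$$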
These values can be useful for deriving statements such as "our model suggests that there is a 95% probability that the species has increased (i.e., > 0% increase) and a 45 percent probability that the species has increased more than 2-fold (i.e., > 100% increase)."

```r
fd_slope_trends_08_18 <- generate_trends(indices = firstdiff_ind,
                                         Min_year = 2008,
                                         Max_year = 2018,
                                         slope = TRUE,
                                         prob_increase = c(0,100))
```

## Custom regional summaries

Yes, you can calculate the trends and trajectories for custom combinations of strata, such as the trends for eastern and western populations of Lincoln's Sparrow.

```r
#stratification <- "bbs_cws"
#strat_data <- stratify(by = stratification, sample_data = TRUE)
#jags_data <- prepare_jags_data(strat_data,
#                               species_to_run = "Lincoln's Sparrow",
#                               model = "gamye")
#jags_mod <- run_model(jags_data = jags_data)
```

Assuming the above setup has been run, the user can then generate population trajectories using a customized grouping of the original strata. First, extract a dataframe that defines the original strata used in the analysis.

```r
st_comp_regions <- get_composite_regions(strata_type = stratification)
```

Then add a column to the dataframe that groups the original strata into the desired custom regions.

```r
st_comp_regions$East_West <- ifelse(st_comp_regions$bcr %in% c(7,8,12:14,22:31), "East", "West")
```

st_comp_regions can now be used as the dataframe input to the argument alt_region_names in generate_indices(), with "East_West" as the value for the argument regions. The relevant trends can then be calculated using the generate_trends() function.

```r
east_west_indices <- generate_indices(jags_mod = jags_mod,
                                      jags_data = jags_data,
                                      alt_region_names = st_comp_regions,
                                      regions = "East_West")
east_west_trends <- generate_trends(indices = east_west_indices)
```

## Exporting the JAGS model

You can easily export any of the bbsBayes models to a text file.
```r
model_to_file(model = "slope", filename = "my_slope_model.txt")
```

Then you can modify the model text (e.g., try a different prior) and run the modified model:

```r
run_model(... , model_file_path = "my_modified_slope_model.txt", ... )
```

Details coming soon...

## Modifying the JAGS model and data

You can even export the bbsBayes model as text and modify it to add in covariates: for example, a GAM smooth to estimate the effect of the day of year on the observations, or an annual weather covariate, or... Then add the relevant covariate data to the jags_data object, and you're off! We'll add some more details and examples soon.

## Comparing Models

Finally, bbsBayes can be used to run Bayesian cross-validations. For example, the get_final_values() function provides an efficient starting point for cross-validation runs, without having to wait for another full burn-in period.

Paper that includes an example of how to implement a cross-validation using bbsBayes. Pre-print: https://doi.org/10.1101/2020.03.26.010215 Supplement:

NOTE: although bbsBayes includes functions to calculate WAIC, recent work has shown that WAIC performs very poorly with the BBS data (https://doi.org/10.1650/CONDOR-17-1.1). We recommend a k-fold cross-validation approach, as in the above zenodo archive.
https://physics.stackexchange.com/questions/362897/collision-between-photon-and-an-isolated-electron
# Collision between photon and an isolated electron

Consider an isolated electron, and a photon of energy hf suffers a collision with it. Will the electron absorb all of the photon's energy, or only part of it? If only part, does the photon after the collision have a lower frequency than before?

- Answered here: physics.stackexchange.com/questions/358727/… - As @AnnaV has stated there, it is called Compton scattering: "A photon interacts with the electron, the electron becomes off shell because it absorbs part of the four momentum of the incoming photon and a lower energy photon leaves". A full absorption is not possible without a violation of conservation of energy or momentum. – safesphere Oct 15 '17 at 3:08
- "suffers collision with it" - what does 'suffers collision' mean? – Alfred Centauri Oct 15 '17 at 3:12

When a photon scatters off a free electron, it can either scatter elastically, i.e. with no change in the frequency of the outgoing photon, or undergo what is called Compton scattering. The distributions of the outgoing photon can be calculated using the expansion in Feynman diagrams to first order. The outgoing photon will have a lower frequency, as the incoming photon transferred part of its momentum and energy to the electron. For the history, see here.

- So complete absorption of energy doesn't take place by the electron. Right? – Gurbir Singh Oct 15 '17 at 5:50
- Right, complete absorption would violate energy conservation in the center of mass, because the electron is an elementary particle (no excited states) and its mass is fixed. – anna v Oct 15 '17 at 5:54
- Come to think of it, also angular momentum conservation, as the spin 1 of the photon cannot be accommodated by the electron at rest and alone. – anna v Oct 15 '17 at 7:43
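For reference, the kinematics described in the answer can be made quantitative: the standard Compton formula gives the wavelength shift of the scattered photon as

$$\lambda' - \lambda = \frac{h}{m_e c}\left(1 - \cos\theta\right)$$

where $$\theta$$ is the scattering angle. Since $$\lambda' \geq \lambda$$, the outgoing photon's frequency is never higher than the incoming one, with equality only for forward scattering ($$\theta = 0$$).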
https://rd.springer.com/chapter/10.1007%2F978-3-030-04354-4_3
# Multilayered Surface Continua

Marcus Aßmus

Chapter. Part of the SpringerBriefs in Applied Sciences and Technology book series (BRIEFSAPPLSCIENCES).

## Abstract

The computation of composite structures, as introduced in Chap. , requires the extension of the presented concept of the surface continuum to multiple layers. This is especially true when the physical layer thicknesses differ widely and the mechanical properties of the layer materials are strongly divergent. This is the case with Anti-Sandwiches. In this sense, the present chapter introduces a so-called layer-wise theory. Each layer is considered individually, whereby the coupling is realized via kinematic constraints. However, apart from the consideration of the individual mid surfaces, this procedure also requires the consideration of interfaces of the physical structure.
https://www.auburn.edu/cosam/departments/math/research/colloquia.htm
# Departmental Colloquia

Our department is proud to host weekly colloquium talks featuring research by leading mathematicians from around the world. Most colloquia are held on Fridays at 4pm in Parker Hall, Room 250 (unless otherwise advertised), with refreshments preceding at 3:30pm in Parker Hall, Room 244.

## DMS Colloquium: Mark Walker

Dec 03, 2021 04:00 PM

Speaker: Mark Walker (Willa Cather Professor, University of Nebraska-Lincoln)

Title: The Total and Toral Rank Conjectures

Abstract: Assume $$X$$ is a nice topological space (a compact $$CW$$ complex) that admits a fixed-point free action by a $$d$$-dimensional torus $$T$$. For example, $$X$$ could be $$T$$ acting on itself in the canonical way. The Toral Rank Conjecture, due to Halperin, predicts that the sum of the (topological) Betti numbers of $$X$$ must be at least $$2^d$$. Put more crudely, this conjecture predicts that it takes at least $$2^d$$ cells to build such a space $$X$$ by gluing them together.

Now suppose $$M$$ is a module over the polynomial ring $$k[x_1, \dots, x_d]$$ that is finite dimensional as a $$k$$-vector space. The Total Rank Conjecture, due to Avramov, predicts that the sum of (algebraic) Betti numbers of $$M$$ must be at least $$2^d$$. Here, the algebraic Betti numbers refer to the ranks of the free modules occurring in the minimal free resolution of $$M$$.

In this talk, I will discuss the relationship between these conjectures and recent progress toward settling them.

Faculty host: Michael Brown

## DMS Colloquium: Dr. Xuyu Wang

Nov 05, 2021 04:00 PM

Speaker: Dr.
Xuyu Wang (Assistant Professor at California State University, Sacramento)

Title: Artificial Intelligence of Things for Robust Wireless Sensing Systems

Abstract: With the development of the Internet of Things (IoT) and wireless techniques, several wireless signals (e.g., Wi-Fi, RFID, acoustic, radar, and LoRa) can be exploited for wireless sensing applications. Artificial Intelligence of Things (AIoT) is an integrative technology that leverages artificial intelligence (e.g., deep learning) to develop data-driven IoT applications. To address some fundamental challenges (e.g., black-box models, limited labeled data, different data domains) in deep-learning-driven wireless sensing systems, this talk will discuss three robust wireless sensing systems (i.e., deep Gaussian processes for indoor localization, recurrent variational autoencoders for human health monitoring, and meta-learning for human pose estimation) using different wireless IoT devices.

Faculty host: Guanqun Cao

## DMS Colloquium: Hanwen Huang

Oct 22, 2021 04:00 PM

Speaker: Hanwen Huang (College of Public Health, University of Georgia)

Title: LASSO risk and phase transition under dependence

Abstract: For a general covariance matrix, we derive the asymptotic risk of LASSO in the limit of both sample size n and dimension p going to infinity with fixed ratio n/p. A phase boundary is precisely established in the phase space. Above this boundary, LASSO perfectly recovers the signals with high probability. Below this boundary, LASSO fails to recover the signals with high probability. While the values of the non-zero elements of the signals do not have any effect on the phase transition curve, our analysis shows that the curve does depend on the signed pattern of the nonzero values of the signal for a non-i.i.d. covariance matrix. Underlying our formalism is a recently developed efficient algorithm called the approximate message passing (AMP) algorithm. We generalize the state evolution of AMP from the i.i.d.
case to the general case. Extensive computational experiments confirm that our theoretical predictions are consistent with simulation results on moderate-size systems.

Faculty host: Peng Zeng

## DMS Colloquium: Ivan Yotov

Mar 26, 2021 04:00 PM

Speaker: Ivan Yotov (University of Pittsburgh, http://www.math.pitt.edu/~yotov/)

Title: Stokes-Biot modeling of fluid-poroelastic structure interaction

Abstract: We study mathematical models and their finite element approximations for solving the coupled problem arising in the interaction between a free fluid and a fluid in a poroelastic material. Applications of interest include flows in fractured poroelastic media, coupling of surface and subsurface flows, and arterial flows. The free fluid flow is governed by the Navier-Stokes or Stokes/Brinkman equations, while the poroelastic material is modeled using the Biot system of poroelasticity. The two regions are coupled via dynamic and kinematic interface conditions, including balance of forces, continuity of normal velocity, and a no-slip or slip-with-friction tangential velocity condition. Well-posedness of the weak formulations is established using techniques from semigroup theory for evolution PDEs with monotone operators. Mixed finite element methods are employed for the numerical approximation. Solvability, stability, and accuracy of the methods are analyzed with the use of suitable discrete inf-sup conditions. Numerical results will be presented to illustrate the performance of the methods, including their flexibility and robustness for several applications of interest.

Brief Bio: Dr. Ivan Yotov is a Professor in the Department of Mathematics at the University of Pittsburgh. He received his Ph.D. in 1996 from Rice University. Dr. Yotov's research interests are in the numerical analysis and solution of partial differential equations and large-scale scientific computing with applications to fluid flow and transport.
His current research focus is on the design and analysis of accurate multiscale adaptive discretization techniques (mixed finite elements, finite volumes, finite differences) and efficient linear and nonlinear iterative solvers (domain decomposition, multigrid, Newton-Krylov methods) for massively parallel simulations of coupled multiphase porous media and surface flows. Other areas of research interest include estimation of uncertainty in stochastic systems and mathematical and computational modeling for biomedical applications. Dr. Yotov is also adjunct faculty at the McGowan Institute for Regenerative Medicine.

Faculty host: Thi-Thao-Phuong Hoang

## DMS Colloquium: Dr. Shan Yu

Mar 19, 2021 04:00 PM

Speaker: Dr. Shan Yu (University of Virginia)

Title: Sparse Modeling of Functional Linear Regression via Fused Lasso with Application to Genotype-by-Environment Interaction Studies

Abstract: The estimator of coefficient functions in a functional linear model (FLM) based on a small number of subjects is often inefficient. To address this challenge, we propose an FLM based on fused learning. This talk will describe a sparse multi-group FLM to simultaneously estimate multiple coefficient functions and identify groups such that coefficient functions are identical within groups and distinct across groups. By borrowing information from relevant subgroups of subjects, our method enhances estimation efficiency while preserving heterogeneity in model parameters and coefficient functions. We use an adaptive fused lasso penalty to shrink coefficient estimates to a common value within each group. To enhance computation efficiency and incorporate neighborhood information, we propose to use a graph-constrained adaptive lasso with a highly efficient algorithm. This talk will use two real data examples to illustrate the applications of the proposed method to genotype-by-environment interaction studies. This talk features joint work with Aaron Kusmec, Lily Wang, and Dan Nettleton.
Brief Bio: Dr. Shan Yu is an Assistant Professor in the Department of Statistics at the University of Virginia. Her research interests include non-/semi-parametric regression methods, functional data analysis, spatial/spatiotemporal data analysis, statistical methods for neuroimaging data, and variable selection for high-dimensional data. Shan's research has appeared in journals such as the Journal of the American Statistical Association and Statistica Sinica. She earned a Ph.D. in Statistics from Iowa State University in 2020 and joined the University of Virginia in 2020.

Host: Guanqun Cao

## DMS Colloquium: Aris Winger

Feb 26, 2021 04:00 PM

Speaker: Aris Winger (Georgia Gwinnett College)

Title: Equity and Advocating in the Mathematics Classrooms and Departments

Abstract: How do we create mathematical spaces within our classrooms that validate and value all students? What are the steps that we can personally take to transform the mathematical experience in our classrooms for marginalized students? In this talk, participants will engage in an interactive conversation about the challenges presented when we start to radically imagine mathematical spaces different from ones that, for too long, have been marginalizing for too many people.

Dr. Winger also has a new book out about advocating for students of color in mathematics. Here is the link in case you would like to pick up the book: https://www.amazon.com/dp/B08QC3SHFG/ref=cm_sw_em_r_mt_dp_cD37FbHZ6ZRJD
http://www.chegg.com/homework-help/questions-and-answers/seems-like-problems-first-wrote-topic-hadto-edit--problem-like-without-textbook-part-calcu-q378615
Seems like I had some problems when I first wrote this topic, I had to edit it.

So the problem is like this (for those without this textbook):

Part A: Calculate the change in air pressure you will experience if you climb a 1500-m mountain, assuming that the temperature and air density do not change over this distance and that they are 22.0 °C and 1.20 kg/m^3, respectively, at the bottom of the mountain.

Part B: If you took a 0.500-L breath at the foot of the mountain and managed to hold it until you reached the top, what would be the volume of this breath when you exhaled it there?

I have already been helped a bit with this problem, but I am stuck somewhere and I need some help. So the solution is like this:

Step 1

Given that the temperature and density of air are constant:
T = 22 °C = 295 K
Density ρ = 1.20 kg/m^3

The pressure P at an altitude y is
P = P0 e^(-Mgy/RT)
where
P0 = pressure of air at sea level = 1.013 x 10^5 Pa
M = molar mass of air = 28.8 x 10^-3 kg/mol
g = acceleration due to gravity = 9.8 m/s^2
y = height climbed = 1500 m
R = ideal gas constant = 8.3145 J/mol K
T = absolute temperature of air = 22 + 273 = 295 K

Step 2 (Part A)

The density of air is ρ = P0 M/(RT), so M = ρRT/P0, and therefore Mgy/RT = ρgy/P0. This gives
P = P0 e^(-ρgy/P0)
  = 1.013 x 10^5 x e^(-1.20 x 9.8 x 1500 / (1.013 x 10^5))
  = 0.851 x 10^5 Pa
P0 - P = (1.013 - 0.851) x 10^5 = 1.62 x 10^4 Pa

Step 3 (Part B)

We have P1V1 = P2V2, with
P1 = 1.013 x 10^5 Pa
P2 = 0.851 x 10^5 Pa
V1 = 0.500 L
V2 = (P1/P2) x V1 = 0.595 L

But I am stuck because I don't know what "e" is, or what its value is in case it is a constant. Help would be greatly appreciated.

Regards,
Alex.
https://reference.globalspec.com/standard/3811388/astm-c992-89-1997
ASTM International - ASTM C992-89(1997)
Standard Specification for Boron-Based Neutron Absorbing Material Systems for Use in Nuclear Spent Fuel Storage Racks

Organization: ASTM International
Publication Date: 10 May 1997
Status: historical
Page Count: 3
ICS Code (Nuclear energy in general): 27.120.01

Scope:

1.1 This specification defines criteria for boron-based neutron absorbing material systems used in racks for storage of nuclear light water reactor (LWR) spent-fuel assemblies or disassembled components in a pool environment, or both.

1.2 The materials systems described herein shall be functional for their service life in the operating environment of a nuclear reactor spent-fuel pool.

1.3 A number of acceptable boron-based absorbing materials combinations are currently available while others are being developed for use in the future. This specification defines criteria essential and applicable to all materials combinations and identifies parameters a buyer should specify to satisfy a unique or particular requirement.

1.4 Compliance with this specification does not relieve the seller or the buyer from obligation to conform to applicable federal regulations governing the storage of nuclear fuel.

Document History

January 1, 2020: Standard Specification for Boron-based Neutron Absorbing Material Systems for Use in Nuclear Fuel Storage Racks in Pool Environment
1.1 This specification defines criteria for boron-based neutron absorbing material systems used in racks in a pool environment for storage of nuclear light water reactor (LWR) spent-fuel assemblies...

January 15, 2016: Standard Specification for Boron-Based Neutron Absorbing Material Systems for Use in Nuclear Fuel Storage Racks in a Pool Environment
1.1 This specification defines criteria for boron-based neutron absorbing material systems used in racks in a pool environment for storage of nuclear light water reactor (LWR) spent-fuel assemblies...

February 1, 2011: Standard Specification for Boron-Based Neutron Absorbing Material Systems for Use in Nuclear Spent Fuel Storage Racks
1.1 This specification defines criteria for boron-based neutron absorbing material systems used in racks in a pool environment for storage of nuclear light water reactor (LWR) spent-fuel assemblies...

February 15, 2006: Standard Specification for Boron-Based Neutron Absorbing Material Systems for Use in Nuclear Spent Fuel Storage Racks
1.1 This specification defines criteria for boron-based neutron absorbing material systems used in racks in a pool environment for storage of nuclear light water reactor (LWR) spent-fuel assemblies...

ASTM C992-89(1997), May 10, 1997: Standard Specification for Boron-Based Neutron Absorbing Material Systems for Use in Nuclear Spent Fuel Storage Racks
1.1 This specification defines criteria for boron-based neutron absorbing material systems used in racks for storage of nuclear light water reactor (LWR) spent-fuel assemblies or disassembled...
http://ohiouniversityfaculty.com/mohlenka/sagecalculus/linear.html
• This web page describes an activity within the Department of Mathematics at Ohio University, but is not an official university web page.
• If you have difficulty accessing these materials due to visual impairment, please email me at [email protected]; an alternative format may be available.

main calculus sage page

# Linear Algebra

linear algebra quick reference

• You can create matrices and scale and add them.
• You can multiply matrices.
• You can make identity matrices and transpose matrices.
• You can compute the norm of a vector.
• You can add vectors in two dimensions and illustrate the result.
• You can add vectors in three dimensions and illustrate the result.
• You can solve a system of linear equations by substitution and show the steps.
• You can do step-by-step Gaussian elimination.
• You can compute the row-reduced form of a matrix. Note that terminology differs between textbooks. If your input has only integers or rational numbers then the result will be expressed in rational numbers; if it includes any decimals (e.g. 1.0) then the result will be expressed in decimals.
• You can compute the inverse of a matrix using the row-reduced form or directly.
• You can compute pivoted LU decompositions.
• You can compute ranks and determinants of matrices.
• You can compute the characteristic polynomial, and solve for its zeros.
• You can compute the eigenvalues and eigenvectors of a matrix.
• You can compute the Jordan canonical form of a matrix, which will be diagonal if the matrix is diagonalizable.
• You can compute the exponential of a variable times a matrix. If the eigenvalues are complex, it will look complex.
• You can compute the QR decomposition of a matrix.
• You can manually compute the QR decomposition of a matrix to illustrate the steps.
• You can use the QR algorithm to compute eigenvalues of a matrix. The diagonal should converge to the eigenvalues (under some assumptions).
• You can compute the least-squares solution to an overdetermined system.
• You can compute various vector and matrix norms.
• You can compute the dot product of vectors in any dimension.
• You can compute the cross product of vectors in 3 dimensions, and illustrate the result.

Martin J. Mohlenkamp
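The live cells behind those links run Sage, which is not reproduced here; as a rough stand-in (an assumption on my part — the page itself only links to Sage worksheets), a few of the operations above look like this in plain NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Solve the linear system A x = b
x = np.linalg.solve(A, b)            # [0.2, 0.6]

# Rank, determinant, and eigenvalues
rank = np.linalg.matrix_rank(A)      # 2
det  = np.linalg.det(A)              # 5.0
eigs = np.linalg.eigvalsh(A)         # (5 ± sqrt(5))/2 for this symmetric matrix

# QR decomposition, and the least-squares solution to an overdetermined system
M = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])
Q, R = np.linalg.qr(M)               # M == Q @ R
coef, *_ = np.linalg.lstsq(M, y, rcond=None)   # intercept 2/3, slope 1/2
```

As on the Sage page, exact (rational) versus decimal behavior differs: NumPy always works in floating point, whereas Sage keeps rational inputs exact.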
http://mathoverflow.net/questions/9581/equivariant-singular-cohomology/9583
# Equivariant singular cohomology

One can define the $G$-equivariant cohomology of a space $X$ as the ordinary singular cohomology of $X \times_G EG$ --- I think this is due to Borel? (See e.g. section 2 of these notes)

Alternatively, if $X$ is a manifold, we also have $G$-equivariant de Rham cohomology, defined in terms of $G$-equivariant differential forms --- I think this is due to Cartan? (See e.g. section 3 of loc. cit.)

I suspect this is extremely standard or obvious, but if it is, I don't know where it's written down: Is it possible to define equivariant cohomology of a topological space in terms of some notion of "equivariant singular cochains", that is, without using the Borel construction?

-

Isn't Cartan's equivariant cohomology the hypercohomology of the de Rham complex? –  Mariano Suárez-Alvarez Dec 23 '09 at 2:35

Do you really mean just the de Rham complex? There's no $G$-stuff in the de Rham complex, and shouldn't equivariant stuff involve $G$-stuff? –  Kevin H. Lin Dec 23 '09 at 2:47

If $G$ acts on $M$, then $G$ acts on $M$'s de Rham complex $\Omega^\bullet(M)$. Now take $\mathbb{H}^\bullet(G,\Omega^\bullet(M))$. This gives you an equivariant theory. You can do it replacing $\Omega^*(M)$ by $S^*(M)$, the singular complex, of course. Since hypercohomology sees only the quasi-isomorphism type of its argument, you get isomorphisms between what you get from de Rham and what you get from $S^*(M)$, &c. –  Mariano Suárez-Alvarez Dec 23 '09 at 2:52

Ah, ok, so this is, umm, hyper-group-cohomology? –  Kevin H. Lin Dec 23 '09 at 3:15

Those adjunctions can only exist if $G$ is discrete. It's not clear from the question if we are assuming this or if $G$ was allowed to be, say, a compact Lie group.
–  Chris Schommer-Pries Dec 24 '09 at 13:36

Here's an answer which I learned from Goresky-Kottwitz-MacPherson's paper on equivariant cohomology and Koszul duality: they use some notion of geometric chain, which is probably something like subanalytic chains, but anyway, the idea is as follows. Suppose $G$ is a compact Lie group of dimension $d$. An abstract equivariant $k$-chain $c$ is a $(k+d)$-dimensional chain in some $\mathbb R^n$ (or perhaps it's better to say $\mathbb R^\infty$) equipped with a free action of $G$. Then if $X$ is a $G$-space, an equivariant chain in $X$ is a $G$-equivariant map from an abstract chain to $X$. You can obviously form a chain complex out of these things, and the result gives you the equivariant cohomology of $X$ (in the Borel construction, say).

-

Yeah, I was guessing that it might be something along those lines. But then how do you put a free action of $G$ on $\mathbb{R}^n$? And then you still have to show that the result is independent of this choice? –  Kevin H. Lin Dec 23 '09 at 3:09

An abstract chain just has to have a $G$ action, not the ambient space. The idea I guess is that a chain in the Borel construction can be pulled back to $X\times EG$ and then I can approximate $EG$ by a smooth finite-dimensional manifold $E$ containing the pulled-back chain. Embed $X\times E$ in $\mathbb R^\infty$ and you have a geometric chain (or rather its graph). Anyway the paper is nicely written if I remember, so probably it's better to read what they said! –  Kevin McGerty Dec 23 '09 at 3:17

Oops, yes, I meant the chain. Ok, I'll try to take a peek at the paper. –  Kevin H. Lin Dec 23 '09 at 3:23

In 1965 or so, Glen Bredon defined ordinary equivariant cohomology, ordinary meaning that it satisfies the dimension axiom: For each coefficient system $M$ (contravariant functor from the orbit category of $G$ to the category of Abelian groups), there is a unique cohomology theory $H^*_G(-;M)$ such that, when restricted to the orbit category, it spits out the functor $M$. Just as in the nonequivariant world, it can be defined using either singular or cellular cochains, the latter defined using $G$-CW complexes. This works as stated for any topological group $G$. For an abelian group $A$, Borel cohomology with coefficients in $A$, $H^*(EG\times_G X;A)$, is the extremely special case in which one takes $M$ to be the constant coefficient system $\underline{A}$ at the group $A$ and replaces $X$ by $EG\times X$. That is, $$H^*_G(EG\times X; \underline{A}) = H^*(EG\times_G X;A)$$

-

You can think of it as the cohomology of the simplicial manifold $X\leftleftarrows X\times G \cdots$ where the $n$-simplices are $X\times G^n$ and the face maps either act on $X$ or multiply two consecutive entries. Of course, some people will tell you that that is really the same as the Borel construction, but if you're willing to interpret things that liberally, you'll never get away from the Borel construction.

-

How does one define cohomology of a simplicial manifold? –  Kevin H. Lin Dec 23 '09 at 17:12

There is an abelian category of "simplicial sheaves over a simplicial manifold" and there will be enough injectives. So you just take any injective resolution. The standard reference (in the scheme case) is "Etale Homotopy of Simplicial Schemes" by E. Friedlander. I think there was also some work by Dupont on exactly this equivariant case. Anyway, in the case you are looking at you can just use the usual de Rham resolution of Z in each level of the simplicial set. You will get a double complex (one direction from the de Rham differential, one simplicial).
You just take the total cohomology. –  Chris Schommer-Pries Dec 24 '09 at 13:34

Thanks, Chris! That was very helpful. –  Kevin H. Lin Dec 24 '09 at 13:50

Of course, "... each level of the simplicial set" should be read as "... each level of the simplicial space". –  Chris Schommer-Pries Dec 24 '09 at 15:59

This isn't singular, but in the same spirit as what you want (i.e. not the Borel construction): Take the cellular chain complex $C(X)$ of a $G$-complex $X$. You can define cohomology groups with coefficients in a chain complex, and $H^*(G,C(X))$ is defined to be the equivariant cohomology of $X$ (for $X$ finite-dimensional and $G$ finite). This is explained in Ken Brown's *Cohomology of Groups* (chapter VII).

-

$G$ induces an action on the singular (co)chain complex $S(X)$ and you can use $H^*(G;S(X))$ in the same way as described by Chris. –  Ralph Feb 24 '11 at 23:46
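To spell out the double-complex remark from the comments above (a sketch, valid for discrete $G$ as the comments note, with $S^q(X)$ the singular cochains):

```latex
% Group direction and (co)chain direction assemble into a double complex:
C^{p,q} \;=\; C^p\!\bigl(G;\, S^q(X)\bigr), \qquad D \;=\; d_G \pm d_S,
% and the hyper(group-)cohomology is the cohomology of the total complex:
\mathbb{H}^n\bigl(G;\, S^\bullet(X)\bigr)
  \;\cong\; H^n\!\bigl(\operatorname{Tot}^\bullet C^{\,\bullet,\bullet},\, D\bigr).
```

Since hypercohomology only sees the quasi-isomorphism type of $S^\bullet(X)$, the same total complex built from the de Rham complex (for $X$ a manifold) computes the same groups.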
https://irzu.org/research/jetbrains-ide-phpstorm-set-all-files-folders-to-brown-background-and-weird-structure/
# jetbrains ide – PhpStorm set all files / folders to brown background and weird structure

So I just noticed that my PhpStorm has set a background color on my project and the structure also looks wrong. It is best explained by some screenshots. This is how it looks now; I cannot even see my App folder etc. here: After I click on Project at the top and then select Project Files, I can see my structure again like normal: My question is, what does this mean and how can I set it "back to normal"? For me, "normal" means the following: I can see all my files under Project, and there is no brown background color. What I have done so far, as suggested by Google searches: 1. Close the project, remove it from recent projects and open it again in PhpStorm. 2. Remove the .idea folder and open my project again. 3. Reload All From Disk. 4. Invalidate Caches. P.S. This is a new Laravel project, with only some minor changes and all files added to git; I also just did one last commit. Nothing "fixes" this.
https://infoscience.epfl.ch/record/76834
Infoscience Journal article

# The synthesis and characterisation of bis(phosphane)-linked (eta(6)-p-cymene)ruthenium(II)-borane compounds

The reaction of [(η6-p-cymene)RuCl2]2 with some bis(phosphane) ligands (dppm, dppe, dppv, dppa, dpp14b, dppf) has been investigated. In general mixtures of products were obtained, although the pendant phosphane complexes [(η6-p-cymene)RuCl2(η1-dppv)] and [(η6-p-cymene)RuCl2(η1-dppa)] were isolated and characterised in the solid state by X-ray diffraction. The latter complex was obtained in lower yield and undergoes an equilibration reaction resulting in the formation of a dimeric species, in which the dppa bridges two ruthenium centres, and uncoordinated phosphane; the bridging species was also structurally characterised in the solid state. In contrast, the reaction of [(η6-p-cymene)RuCl2(PPh3)] with dppa in the presence of [NH4]PF6 results in the formation of [(η6-p-cymene)RuCl(PPh3)(η1-dppa)]PF6, which is stable in solution. A series of linked ruthenium-borane complexes, viz. [(η6-p-cymene)RuCl2(η1-phosphane-BH3)] (phosphane = dppm, dppe, dppv, dppa, dpp14b, dppf) and [(η6-p-cymene)RuCl(PPh3)(η1-dppa-BH3)]PF6, have been prepared from isolated pendant phosphane complexes, those generated in situ, or from a preformed phosphane-borane adduct. The solid-state structures of [(η6-p-cymene)RuCl2(η1-dppm-BH3)], [(η6-p-cymene)RuCl2(η1-dppe-BH3)] and [(η6-p-cymene)RuCl2(η1-dppv-BH3)] have been determined by X-ray diffraction analysis.
https://www.physicsforums.com/threads/what-is-the-coalescence-factor.650537/
What is the coalescence factor?

1. Nov 8, 2012

Waxbear

I am reading an article on experimental nuclear physics. The article is about deuteron and triton production in Pb + Pb collisions. In the article they mention the coalescence factor, which is given by:

$B_{A}=A\frac{2s_{A}+1}{2^{A}}R^{N}_{np}\left(\frac{h^{3}}{m_{p}\gamma V}\right)^{A-1}$

The coalescence factor has something to do with the formation of light clusters A(Z,N). So A is the mass number of the cluster, R_np is the ratio of neutrons to protons participating in the collision, gamma is just the Lorentz factor related to the velocity of the cluster, and V is the volume of particles at freeze-out (after hadronization, when there are no more strong interactions between the nucleons). I don't know what s_A is though. I have worked out the units for A = 2 to be:

$s_{A}\cdot \frac{eV^{2}s}{m}+\frac{eV^{2}s}{m}$

But I don't know the units of s_A. I was hoping for this to turn out unitless, since it could then be interpreted as a sort of probability of a cluster A(Z,N) to form. Now I'm not sure what it is.

I apologize if this post was supposed to go in homework. Technically it is a sort of homework question, since I am supposed to present the article at an exam tomorrow. But I figured that there was a bigger chance that someone could help me with this on the HEP board.

2. Nov 8, 2012

Staff Emeritus

It has to be unitless, since you add 1 to it.

3. Nov 8, 2012

Waxbear

Of course, I didn't even consider that. Okay, so I guess the coalescence factor has dimensions of:

$\frac{eV^{2}s}{m}$

I'm still not exactly sure what the coalescence factor is, though.

4. Nov 8, 2012

Staff: Mentor

Where does your formula come from? I found a similar one in Coalescence and flow in ultra-relativistic heavy ion collisions (pdf, equation 6.2), but that uses (2pi)^3 in the numerator of the brackets. I don't see how masses cancel in the equation. A cross-section (1/m^2) or a dimensionless fraction would be a nice result, and I think c=1 is implied.

5. Nov 8, 2012

Waxbear

I probably did something wrong when I tried to find the dimensions, and yes, I think you are right that c = 1 is implied. I got the equation from an old article called "Deuteron and triton production with high energy sulphur and lead beams". It's from 2001, so you might not be able to find it on the internet anymore. Yes, a cross-section or a fraction would be very nice; however, I found a plot later in the article where B_2 has dimensions:

$GeV^{2}c^{-3}$

Which is mass times momentum. Weird.
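One way to check the unit bookkeeping from this thread (a sketch; SI dimensions tracked as (kg, m, s) exponent tuples): the bracket $h^{3}/(m_{p}\gamma V)$ differs from $\mathrm{GeV}^{2}c^{-3}$ by exactly a factor of $c^{2}$, so the two agree once $c = 1$ is taken as implied, consistent with the plot's units for $B_2$.

```python
# Track SI dimensions as (kg, m, s) exponent tuples.
def mul(*dims):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(sum(es) for es in zip(*dims))

def power(dim, n):
    """Raising a quantity to the n-th power scales its exponents."""
    return tuple(n * e for e in dim)

h   = (1, 2, -1)   # Planck constant: J*s = kg m^2 / s
m_p = (1, 0, 0)    # proton mass (gamma is dimensionless)
V   = (0, 3, 0)    # freeze-out volume, m^3
GeV = (1, 2, -2)   # an energy
c   = (0, 1, -1)   # a speed

bracket = mul(power(h, 3), power(m_p, -1), power(V, -1))   # h^3 / (m_p V)
target  = mul(power(GeV, 2), power(c, -3))                 # GeV^2 c^-3

diff = tuple(a - b for a, b in zip(bracket, target))
print(bracket)               # (2, 3, -3)
print(target)                # (2, 1, -1)
print(diff == power(c, 2))   # True: the mismatch is exactly c^2, gone when c = 1
```

So with $c = 1$ the per-nucleon bracket indeed carries $\mathrm{GeV}^{2}c^{-3}$, and $s_A$ (being added to 1 in $2s_A + 1$) must be dimensionless.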