http://ilovephilosophy.com/viewtopic.php?f=2&t=194761&p=2721111
## Carleas - Is ILP capitalist-philanthropic or socialist?
This is the place to shave off that long white beard and stop being philosophical; a forum for members to just talk like normal human beings.
Moderator: MagsJ
### Carleas - Is ILP capitalist-philanthropic or socialist?
How do you perceive the economic model?
Last edited by Jakob on Thu Feb 28, 2019 5:47 pm, edited 1 time in total.
For behold, all acts of love and pleasure are my rituals
Jakob
ILP Legend
Posts: 6407
Joined: Sun Sep 03, 2006 9:23 pm
Location: look at my suit
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Next up: Which ought he to be?
He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles
Start here: viewtopic.php?f=1&t=176529
Then here: viewtopic.php?f=15&t=185296
And here: viewtopic.php?f=1&t=194382
iambiguous
ILP Legend
Posts: 29203
Joined: Tue Nov 16, 2010 8:03 pm
Location: baltimore maryland
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
He? Is ILP male?
For behold, all acts of love and pleasure are my rituals
Jakob
ILP Legend
Posts: 6407
Joined: Sun Sep 03, 2006 9:23 pm
Location: look at my suit
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Jakob wrote:He? Is ILP male?
Uh, oops?
Next up: Which ought it to be?
He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles
Start here: viewtopic.php?f=1&t=176529
Then here: viewtopic.php?f=15&t=185296
And here: viewtopic.php?f=1&t=194382
iambiguous
ILP Legend
Posts: 29203
Joined: Tue Nov 16, 2010 8:03 pm
Location: baltimore maryland
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
soundcloud
promethean75
Posts: 301
Joined: Thu Jan 31, 2019 7:10 pm
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Capitalism is the antithesis of the love of anything remotely resembling wisdom.
Republicans are brain damaged https://journals.plos.org/plosone/artic ... ne.0052970
Conservatism correlates inversely with education and intelligence viewtopic.php?f=3&t=194612
Serendipper
Philosopher
Posts: 2064
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
ILP is more like an oil-rich state with a benevolent god-king. It has a natural resource in the form of my day job, which is sufficient to provide for the needs of its citizens, but also means that all decision-making is ultimately subject to my whim. Enlightened leader that I am, I generally choose not to exercise my locally infinite power.
It would be more capitalist if it had any wealth-like feathers, e.g. a karma system or upvoting, and if greater site wealth meant greater site power. It would be more socialist if it had a flat-rate fee to participate (and maybe provided our lesser lights with editing help). It would be more of either if the costs were supported by e.g. ad revenue, such that it depended on attracting new (and/or wealthier) participants.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
Posts: 5724
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
I guess god-king it is then.
I would appreciate it personally if that would become your title.
For behold, all acts of love and pleasure are my rituals
Jakob
ILP Legend
Posts: 6407
Joined: Sun Sep 03, 2006 9:23 pm
Location: look at my suit
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Jakob wrote:I would appreciate it personally if that would become your title.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
Posts: 5724
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Carleas wrote:ILP is more like an oil-rich state with a benevolent god-king. It has a natural resource in the form of my day job, which is sufficient to provide for the needs of its citizens, but also means that all decision-making is ultimately subject to my whim. Enlightened leader that I am, I generally choose not to exercise my locally infinite power.
The Kingdom of ILP benevolently overseen by the House of Carleas
It would be more capitalist if it had any wealth-like feathers, e.g. a karma system or upvoting, and if greater site wealth meant greater site power. It would be more socialist if it had a flat-rate fee to participate (and maybe provided our lesser lights with editing help). It would be more of either if the costs were supported by e.g. ad revenue, such that it depended on attracting new (and/or wealthier) participants.
Wouldn't it be more capitalistic if you found a way to capitalize on it (i.e. ads, fees)? As it stands, it's more of a social service offering a wealth of wisdom for free, at your expense. The costs should be a charitable contribution for tax purposes.
Serendipper
Philosopher
Posts: 2064
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
I guess there's two ways to interpret the question. One is the economics of ILP in the context of the broader economy, in which case it's like a privately maintained park, which is like a throwback to old patronage systems and not really either capitalist or socialist.
The other question, which is the one I answered, is the social order of ILP from within ILP, i.e. who are the citizens, what's the government look like, what are the resources and who controls access. I think it's similar to Singapore, with a dictator that owns everything but is mostly hands-off, other than weird interventions like prohibiting chewing gum.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
Posts: 5724
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Carleas wrote:I guess there's two ways to interpret the question.
It just keeps getting more and more complicated lol
The other question, which is the one I answered, is the social order of ILP from within ILP, i.e. who are the citizens, what's the government look like, what are the resources and who controls access.
How do fees and ads fit into that? It seems like you answered one question one way and the other another way:
It would be more capitalist if it had any wealth-like feathers, e.g. a karma system or upvoting, and if greater site wealth meant greater site power.
It would be more socialist if it had a flat-rate fee to participate (and maybe provided our lesser lights with editing help).
It would be more of either if the costs were supported by e.g. ad revenue, such that it depended on attracting new (and/or wealthier) participants.
Btw I'm glad there is no karma or voting: appeal to popularity.
So if there is no way to gain site-wealth, then everyone is perpetually equal, therefore it's socialism. Right? My opinion will never be worth more than anyone else's opinion. There is no way for me to consolidate site-wealth.
Serendipper
Philosopher
Posts: 2064
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Oh god-king, please accept my humble contribution:
Carleas wrote:It would be more capitalist if it had any wealth-like feathers, e.g. a karma system or upvoting, and if greater site wealth meant greater site power.
A karma system or upvoting would establish a kind of currency, yes, but Capitalism requires the ability not only to spend such a currency, but also to invest it into some kind of private ownership - perhaps allowing members to spend their currency on owning sub-forums to run as they please, or at least within your divine monarchist law. Competition would then theoretically ensue, and the rubbish rulers would go out of business and have to sell their forum to someone more worthy.
At the moment, there appear to be something like feudal lords ultimately under your rule, but offering military services (albeit more in the form of an internal policing system) as the serfs work the fields i.e. post (the majority of) threads and replies.
What I find interesting is that the Marxist "Historical Materialism" correlates with population size, indicating that this forum is bigger than one run in a Tribalistic way, but not so big as to become unmanageable even by moderators - which would then require something like the above described "capitalist-like" system for further decentralisation.
Carleas wrote:It would be more socialist if it had a flat-rate fee to participate (and maybe provided our lesser lights with editing help).
Sticking within the interpretation of the question as applying to the forum's internal structure (rather than in terms of its outside funding to exist at all), Socialism would only happen if the size of the forum became so unwieldy that even the Capitalist model could not work effectively enough, and members would overthrow the capitalist ownership of the sub-forums, until it became communally established between members how to run and govern each sub-forum in a Communist model.
It would be an interesting social experiment to see what would happen to a forum of such size that it would come to this, and to see if somewhere along the way - some authoritarian leader turned up to lead this revolution and subsequently attempted to re-take the Carleas god-king role over a forum population much larger than can be centrally managed - causing it to all fall apart, as history is supposed to indicate "necessarily" happens...
Silhouette
Philosopher
Posts: 3591
Joined: Tue May 20, 2003 1:27 am
Location: Existence
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Serendipper wrote:How do fees and ads fit into that?
Fees/ads change things orthogonally to capitalist/socialist, but would move the site towards a policy area where that question is more meaningful. I'd say it's not socialist because citizens lack ownership, both private and public. Fees, for example, would give citizens a kind of ownership. Ads, by contrast, would effectively monetize contributions rather than citizens, so that citizens would have power over the state in the form of bargaining power (i.e. "I'll keep adding $$ content if you make change xyz"). As it stands, there's no requirement for value generation tied to the health of the site (at least in terms of the provision of necessary services, i.e. hosting etc.). That disconnect takes us out of the socialist/capitalist question. I would think you'd want to exclude oil-rich states from socialism, since the outcomes there are generally pretty shitty, even though they technically meet a lot of the criteria for being socialist.
Serendipper wrote:Btw I'm glad there is no karma or voting: appeal to popularity.
We're not ever likely to have a system like this because it's a well-above-zero lift to implement, but conceptually I go back and forth on its utility. It can definitely be overdone and lead to bad outcomes, particularly on a site that tries to accommodate controversial worldviews. But some minimal version could improve things, especially by catching and demoting the overlooked dreck, and also by calling attention to particularly solid contributions.
Arguably there is a super minimal karma system: new user permissions are restricted for their first few posts, and non-custom ranks are tied to post counts. It's minimal enough that no one thinks of this as a karma system, but it's basically treating post count as karma.
Silhouette wrote:A karma system or upvoting would establish a kind of currency, yes, but Capitalism requires the ability not only to spend such a currency, but also to invest it into some kind of private ownership - perhaps allowing members to spend their currency on owning sub-forums to run as they please, or at least within your divine monarchist law. Competition would then theoretically ensue, and the rubbish rulers would go out of business and have to sell their forum to someone more worthy.
I basically agree, although the analogy begins to break down as we get more literal. To have a true economy, we'd need some kind of currency which could both be earned by actions taken on the site, and spent on features. So, for example, users might get a certain amount of karma upon joining, spend karma both to post and to read others' posts, and receive some part of the karma that others spend to read their posts. Karma could be spent to found new forums or to promote posts or the like. This is actually a fascinating thought experiment and it would be interesting to observe, but I don't know that it would produce the best philosophy (as opposed to e.g. lots of threads full of porn and salacious rumors about our dear god-king).
Silhouette wrote:At the moment, there appear to be something like feudal lords ultimately under your rule, but offering military services (albeit more in the form of an internal policing system) as the serfs work the fields i.e. post (the majority of) threads and replies.
Yes, we do have an unelected nobility with significant power and near-absolute control over their fiefdom. But I'm not sure that the users are serfs. The value proposition that ILP offers to users is the opportunity to post in a place where other users will see them. That would be something like a serf working a field in exchange for the opportunity to work the field beside her friends.
Silhouette wrote:What I find interesting is that the Marxist "Historical Materialism" correlates with population size, indicating that this forum is bigger than one run in a Tribalistic way, but not so big as to become unmanageable even by moderators - which would then require something like the above described "capitalist-like" system for further decentralisation.
I think we actually got to the point where things became unmanageable by moderators, and we need either a greater resource expenditure or a decentralized system. Instead, we failed to deliver either, people got frustrated and left, and we shrank back down to a size that could be managed by moderators. Which is to say that the causal connection might go the other way: if we implemented the capitalist-like (or, more accurately, market-like) system, we would probably see more growth.
Silhouette wrote:It would be an interesting social experiment to see what would happen to a forum of such size that it would come to this, and to see if somewhere along the way - some authoritarian leader turned up to lead this revolution and subsequently attempted to re-take the Carleas god-king role over a forum population much larger than can be centrally managed - causing it to all fall apart, as history is supposed to indicate "necessarily" happens...
There are some parallels in what you're saying to what has happened with Facebook, Reddit, and Twitter over the past few years. Those platforms grew very rapidly, and experienced problems with moderation, leading to crackdowns followed by large-scale defections and the creation of new independent 'states'. Reddit and Twitter seem to have weathered the storm better, at least in terms of quality of discussion. Facebook used more communist-like central planning in the form of algorithmic moderation, and Reddit used more capitalist-like decentralization in the form of subreddits and karma. Twitter's approach has some lighter moderation plus organic controls of liking/retweeting/unfollowing/muting/blocking. The platforms have other differences, but it does appear that one dimension on which they compete with each other is social policy.
Carleas wrote:wealth-like feathers
BTW, I meant to write "wealth-like features", but "wealth-like feathers" is a funny and evocative typo, and I wish I were clever enough to come up with that sort of thing intentionally.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
Posts: 5724
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Carleas wrote:
Serendipper wrote:How do fees and ads fit into that?
I'd say it's not socialist because citizens lack ownership, both private and public. Fees, for example, would give citizens a kind of ownership.
But fees are more like rent, right? I still wouldn't own anything except what rights are allotted by the TOS agreement. Now if you were to issue shares....
Ads, by contrast, would effectively monetize contributions rather than citizens, so that citizens would have power over the state in the form of bargaining power (i.e. "I'll keep adding $$ content if you make change xyz").
But don't I have that power now? You said speech maximization is your goal, so I could still offer my content-creation as a bargaining chip.
I would think you'd want to exclude oil-rich states from socialism, since the outcomes there are generally pretty shitty, even though they technically meet a lot of the criteria for being socialist.
There are many definitions of socialism, but one I prefer describes one pole of the dichotomy of dispersal/accretion of wealth. So even though the oil-rich states are sometimes lacking democracy and citizen-ownership of resources, the wealth is still distributed rather than hoarded. The only reason for a king to distribute wealth is for the good of society (social). There is no law saying the king has to be benevolent.
the outcomes there are generally pretty shitty
Brunei has the second-highest Human Development Index among the Southeast Asian nations, after Singapore, and is classified as a "developed country".[13] According to the International Monetary Fund (IMF), Brunei is ranked fifth in the world by gross domestic product per capita at purchasing power parity. The IMF estimated in 2011 that Brunei was one of two countries (the other being Libya) with a public debt at 0% of the national GDP. Forbes also ranks Brunei as the fifth-richest nation out of 182, based on its petroleum and natural gas fields.[14] https://en.wikipedia.org/wiki/Brunei
The biggest problem in Brunei is the Islamic religion.
Norway doesn't have that problem. Norway typically tops every measure of prosperity.
Norway has had the highest Human Development Index ranking in the world since 2009, a position also held previously between 2001 and 2006.[20] It also had the highest inequality-adjusted ranking[21][22][23] until 2018 when Iceland moved to the top of the list.[24] Norway ranked first on the World Happiness Report for 2017[25] and currently ranks first on the OECD Better Life Index, the Index of Public Integrity, and the Democracy Index.[26] Norway has one of the lowest crime rates in the world.[27]
On a per-capita basis, Norway is the world's largest producer of oil and natural gas outside of the Middle East.
Norway is a unitary constitutional monarchy with a parliamentary system of government, wherein the King of Norway is the head of state and the prime minister is the head of government. Power is separated among the legislative, executive and judicial branches of government, as defined by the Constitution, which serves as the country's supreme legal document.
https://en.wikipedia.org/wiki/Norway
The shitty outcomes are either a result of religion or failure to distribute wealth (lack of socialism - ie Venezuela, owner of the world's largest oil reserve).
Serendipper wrote:Btw I'm glad there is no karma or voting: appeal to popularity.
We're not ever likely to have a system like this because it's a well-above-zero lift to implement, but conceptually I go back and forth on its utility. It can definitely be overdone and lead to bad outcomes, particularly on a site that tries to accommodate controversial worldviews. But some minimal version could improve things, especially by catching and demoting the overlooked dreck, and also by calling attention to particularly solid contributions.
You have much faith in people lol. The most sensible and factually accurate posts on zerohedge almost always have the most downvotes and consequently I've arranged for the comments to be displayed starting with the most downvoted.
One upgrade I could definitely get behind is to make the site more picture and video friendly. It would be nice to drag n drop and have videos cued instead of asking people to forward to a specific time.
Arguably there is a super minimal karma system: new user permissions are restricted for their first few posts, and non-custom ranks are tied to post counts. It's minimal enough that no one thinks of this as a karma system, but it's basically treating post count as karma.
I know, and I don't care for either one. I mean, ok, I can see value in recognizing new users to welcome them, but I don't like accumulating clout. It's almost like getting older.
Silhouette wrote:A karma system or upvoting would establish a kind of currency, yes, but Capitalism requires the ability not only to spend such a currency, but also to invest it into some kind of private ownership - perhaps allowing members to spend their currency on owning sub-forums to run as they please, or at least within your divine monarchist law. Competition would then theoretically ensue, and the rubbish rulers would go out of business and have to sell their forum to someone more worthy.
I basically agree, although the analogy begins to break down as we get more literal. To have a true economy, we'd need some kind of currency which could both be earned by actions taken on the site, and spent on features. So, for example, users might get a certain amount of karma upon joining, spend karma both to post and to read others' posts, and receive some part of the karma that others spend to read their posts. Karma could be spent to found new forums or to promote posts or the like.
Stackexchange is essentially like that. You sign up and receive 10 points. After 100 points you get powers to edit questions, improve grammar, etc. After 1000 points you get moderation powers (question deletion, locking, etc). And so on. I left specifically because of it. It is a good analogy for capitalism though: the lucky first-comers have all the power to suppress competition and delete dissenting opinion, cementing their power. And the guy asking the question, who by definition cannot judge a good answer, has the power to award 15 points to the person who supplies the answer that he thinks is best, which usually happens before better answers have been submitted. The whole experience is hellish and I've heard similar complaints about wikipedia.
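Purely as an illustration of the karma-as-currency thought experiment quoted above (nothing like this exists on ILP; the ledger class, amounts and thresholds below are all made up, with the thresholds loosely echoing the StackExchange tiers described here), a minimal sketch might look like:
```python
# Toy sketch only: an imagined karma-as-currency ledger, not an ILP/phpBB feature.
# All numbers are invented.

SIGNUP_GRANT = 10      # karma granted on joining
POST_COST = 2          # karma spent to publish a post
READ_COST = 1          # karma a reader spends to open someone else's post
AUTHOR_SHARE = 0.5     # fraction of the read fee passed back to the author

class KarmaLedger:
    def __init__(self):
        self.balance = {}

    def join(self, user):
        self.balance[user] = SIGNUP_GRANT

    def post(self, author):
        if self.balance[author] < POST_COST:
            raise ValueError(f"{author} cannot afford to post")
        self.balance[author] -= POST_COST

    def read(self, reader, author):
        if self.balance[reader] < READ_COST:
            raise ValueError(f"{reader} cannot afford to read")
        self.balance[reader] -= READ_COST
        self.balance[author] += READ_COST * AUTHOR_SHARE

    def permissions(self, user):
        # Threshold-based powers, in the spirit of "100 points to edit, 1000 to moderate".
        k = self.balance[user]
        return {"edit": k >= 100, "moderate": k >= 1000}

ledger = KarmaLedger()
ledger.join("alice")
ledger.join("bob")
ledger.post("alice")                 # alice pays 2 to post
ledger.read("bob", "alice")          # bob pays 1, alice receives 0.5
print(ledger.balance)                # {'alice': 8.5, 'bob': 9}
print(ledger.permissions("alice"))   # {'edit': False, 'moderate': False}
```
Whether pricing reads and posts like this would select for good philosophy, rather than for attention-grabbing content, is exactly the doubt raised above.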
Serendipper
Philosopher
Posts: 2064
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Carleas wrote:I basically agree, although the analogy begins to break down as we get more literal. To have a true economy, we'd need some kind of currency which could both be earned by actions taken on the site, and spent on features. So, for example, users might get a certain amount of karma upon joining, spend karma both to post and to read others' posts, and receive some part of the karma that others spend to read their posts. Karma could be spent to found new forums or to promote posts or the like.
This is what I was saying, apart from spending karma both to post and to read others' posts - that's an interesting dimension that would mimic the pricing aspect of the capitalist system that is at the heart of the profit-making mechanism. I guess that's just as integral as the private property aspect that I was emphasising, and maybe that was what you were getting at when you said that by itself breaks down the analogy. You are a good and just god-king after all, it is known.
A fascinating thought experiment for sure, and I wonder what it says about Capitalism, if anything, if it's doubtful whether its modelling would produce the best philosophy upon its application here?
Carleas wrote:I think we actually got to the point where things became unmanageable by moderators, and we need either a greater resource expenditure or a decentralized system. Instead, we failed to deliver either, people got frustrated and left, and we shrank back down to a size that could be managed by moderators.
Which is to say that the causal connection might go the other way: if we implemented the capitalist-like (or, more accurately, market-like) system, we would probably see more growth.
Good point, there were definitely much fewer people around when I returned to this place most recently - perhaps I missed the issue coming to a head, but I think I was certainly around before then to be familiar enough with what you're referring to. Giving them their own forums to bitch about the ones they didn't like would have been "a" solution, though leaving to start their own achieved much the same outcome and without the potential reputation damage that their continued contributions here would have caused in the long term to the forum as a whole. It was something akin to an invasion, and a seemingly expansionist one at that - aiming to replace rather than compete against.
I assume this is what you're talking about, at least? Would you have acted differently now you have the benefit of hindsight?
Carleas wrote:There are some parallels in what you're saying to what has happened with Facebook, Reddit, and Twitter over the past few years. Those platforms grew very rapidly, and experienced problems with moderation, leading to crackdowns followed by large-scale defections and the creation of new independent 'states'. Reddit and Twitter seem to have weathered the storm better, at least in terms of quality of discussion. Facebook used more communist-like central planning in the form of algorithmic moderation, and Reddit used more capitalist-like decentralization in the form of subreddits and karma. Twitter's approach has some lighter moderation plus organic controls of liking/retweeting/unfollowing/muting/blocking. The platforms have other differences, but it does appear that one dimension on which they compete with each other is social policy.
From what I've been hearing, Twitter has been resorting to some more authoritarian policies as of late, as have Patreon and, to a lesser extent, YouTube. The problem is that the leaders in their respective specialities have come to resemble monopolies in practice, which I think mimics the trajectory of the capitalist market in general, and when you're denied access to a monopoly it's not the same to demote yourself to the much smaller-scale competition - thus the capitalist competition theory somewhat fails in practice in this respect. Facebook seems to be falling a bit out of favour, with Instagram holding up better in the picture sharing department at least, so the competition model isn't completely without success. I get the feeling that Reddit is relatively underground, well known but not as openly as Facebook and Twitter for example. I don't actually use most of these platforms so my understanding is somewhat lacking, but not so much that I can't see the parallels and potential sources of inspiration on how to run your own place.
Silhouette
Philosopher
Posts: 3591
Joined: Tue May 20, 2003 1:27 am
Location: Existence
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Silhouette wrote:there were definitely much fewer people around when I returned to this place most recently
The decline in forum participation is a function of mobile device popularity, especially where the site software wasn't made accommodating soon enough. One could argue that the Facebook and Twitter giants stole customers, but I don't buy it since those services existed before smartphones. It boils down to being too difficult to type and read on small devices. ATV and motorcycle forums used to be bustling pre-2012, but are lucky if there is anyone but mods now; just the rogue guy asking for a manual for his bike. Correlations noticed after 2012 were probably coincidental with the device segue.
Facebook seems to be falling a bit out of favour
I heard facebook is for old people. The kids prefer snapchat.
monopoly
I've been thinking a lot lately about Friedman's idea that monopolies fall apart on their own, and what's impressive is that he called the fact that little bitty Kmart would buy Sears way back in 1980.
FWD to 19:49
As a matter of fact, you say "can Sears buy Kmart", but the way Kmart has been growing the question is gonna be can Kmart buy Sears LOL!
Donahue was concerned that the monopolistic Sears might buy Kmart, but Kmart bought Sears in 2004, and now both are on their way out due to Amazon.
There seems to be much truth in what Friedman said. Monopolies are still scary, but so far Friedman has been correct.
Serendipper
Philosopher
Posts: 2064
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Serendipper wrote:The decline in forum participation is a function of mobile device popularity
That's a good point - as one reason at least.
Serendipper wrote:I've been thinking a lot lately about Friedman's idea that monopolies fall apart on their own.
Hell, I'd be more than happy if you turned full-on free market Capitalist overnight, or right now even - so long as you had good reason for it. I think we should try out and imbed ourselves in all sorts of different ideologies in good faith, to be sure we are understanding them right.
I have nothing against the theory that monopolies tend to collapse under their own weight and go astray through their own inertia - especially in the face of increasingly changing environments and with the need for change and adaptation.
I do have something against the theory that Capitalism best encourages new adaptation from all sources - and not just from those already with connections and money, and that Capitalism adequately prevents monopolies emerging or even oligopolies collectively dominating the market for too long. There is value in the reliability of brands, and large collections of wealth can still adapt to a certain extent, so there are arguments in favour of what Capitalism encourages at the top end of wealth, but they are not necessarily better arguments. A constant influx of new business is undoubtedly better at adaptation, but where are the equal opportunities when initial conditions are so diverse regardless of natural talent? Capitalists praise natural talent as what they foster under their economic model, but so do I - I want natural talent to succeed, I just don't think Capitalism is optimal for this - and this is not to say that clichés about some black and white strawman opponent to Capitalism are what I am advocating instead!
Consider Neil deGrasse Tyson's experience, mentioned in his latest appearance on Joe Rogan's podcast if not elsewhere by himself or others: that government is best at providing funding for untested ideas. Private investors need proof, security, convincing agreements to so generously offer the permission (money) that they happen to legally possess at the time. Unproven hypotheses? Insufficiently tested groundbreaking discoveries? Forget it, unless you're already rich and can fund it yourself... Capitalism takes over once it's safe, and grows the idea beyond its welcome and past the point of its danger. The owners gamble to reap the profits from the developers and the workers themselves - it's all so disproportionate as a system of distribution of wealth! Do I therefore advocate Maoism or Stalinism? I hope nobody is so retarded as to think so.
To any posters like Pedro, being shocked by or denying the existence of people who don't subscribe to your ideology and by what they say (as above), only lends evidence to the hypothesis that you are hanging around too much in familiar territory and not exploring and familiarising yourself with other territory.
Silhouette
Philosopher
Posts: 3591
Joined: Tue May 20, 2003 1:27 am
Location: Existence
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Silhouette wrote:
Serendipper wrote:The decline in forum participation is a function of mobile device popularity
That's a good point - as one reason at least.
Thanks
Serendipper wrote:I've been thinking a lot lately about Friedman's idea that monopolies fall apart on their own.
Hell, I'd be more than happy if you turned full-on free market Capitalist overnight, or right now even - so long as you had good reason for it. I think we should try out and imbed ourselves in all sorts of different ideologies in good faith, to be sure we are understanding them right.
He resonated strongly with Democrats in some areas:
It can be argued that private charity is insufficient because the benefits from it accrue to people other than those who make the gifts— ... a neighborhood effect. I am distressed by the sight of poverty; I am benefited by its alleviation; but I am benefited equally whether I or someone else pays for its alleviation; the benefits of other people's charity therefore partly accrue to me. To put it differently, we might all of us be willing to contribute to the relief of poverty, provided everyone else did. We might not be willing to contribute the same amount without such assurance. In small communities, public pressure can suffice to realize the proviso even with private charity. In the large impersonal communities that are increasingly coming to dominate our society, it is much more difficult for it to do so.
Suppose one accepts, as I do, this line of reasoning as justifying governmental action to alleviate poverty; to set, as it were, a floor under the standard of life of every person in the community. [While there are questions of how much should be spent and how, the] arrangement that recommends itself on purely mechanical grounds is a negative income tax. ... The advantages of this arrangement are clear. It is directed specifically at the problem of poverty. It gives help in the form most useful to the individual, namely, cash. It is general and could be substituted for the host of special measures now in effect. It makes explicit the cost borne by society. It operates outside the market. Like any other measures to alleviate poverty, it reduces the incentives of those helped to help themselves, but it does not eliminate that incentive entirely, as a system of supplementing incomes up to some fixed minimum would. An extra dollar earned always means more money available for expenditure.
https://en.wikipedia.org/wiki/Milton_Fr ... income_tax
He was an advocate of UBI, essentially.
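To make the negative income tax mechanics concrete (the 50% rate and $10,000 floor here are purely hypothetical numbers, not Friedman's): a person earning $4,000 would receive 0.5 × ($10,000 − $4,000) = $3,000, for a total of $7,000. Earning one more dollar only reduces the subsidy by 50 cents, so total income still rises - which is the point of "an extra dollar earned always means more money available for expenditure."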
Drug policy
Friedman also supported libertarian policies such as legalization of drugs and prostitution. During 2005, Friedman and more than 500 other economists advocated discussions regarding the economic benefits of the legalization of marijuana.[97]
Gay rights
Friedman was also a supporter of gay rights.[98] He never specifically supported same-sex marriage, instead saying "I do not believe there should be any discrimination against gays."[99]
Immigration
Friedman favored immigration, saying "legal and illegal immigration has a very positive impact on the U.S. economy."[100]
Looks like a Dem to me.
Capitalists praise natural talent as what they foster under their economic model, but so do I - I want natural talent to succeed, I just don't think Capitalism is optimal for this -
Capitalism rewards luck and the talent of exploitation. The guy who finds ways to exploit the most people is rewarded most.
Serendipper
Philosopher
Posts: 2064
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Serendipper wrote:But fees are more like rent, right?
If ILP is a state, then fees are more like a flat per capita tax. I'd say that gives a sort of 'ownership', even if under the law there is none. For ILP-as-service, fees are like rent.
Serendipper wrote:But don't I have that power now? You said speech maximization is your goal, so I could still offer my content-creation as a bargaining chip.
Yes, though I think it's less than it would be if I were profit-maximizing through ads. For one thing, if we were ad funded, lurkers would be just as good as participants in terms of revenue. If you made posts that generated a lot of page views without generating a lot of discussion, that would still translate to increased ad revenue. So I'd not only want you participating, I'd want you tapping your social network to drive traffic this way. Again, that isn't real ownership, but it's strong effective ownership; I'd have a stake in people feeling like they own the site in the same way that people feel like they own their Blogger blogs.
(This is an idiosyncratic meaning of 'ownership' that only applies in the ILP-as-state metaphor, and in the way that citizens own the state. In a literal sense, under US law (and most other countries' law as I understand it), users own their posts and ILP has a license to display them. Disclaimer: IANYL.)
Carleas wrote:the outcomes [for oil-rich states] are generally pretty shitty
Serendipper wrote:Brunei...Norway...
I was referring to the resource curse, although I thought that was more widely accepted than it appears to be.
I am surprised to see Venezuela classified as not-socialist. I don't think the definition is necessarily unreasonable, though I would object that our political systems should be defined based on the policies they employ rather than on the outcomes. A laissez-faire economy that results in an equitable distribution doesn't become socialist. I would define Venezuela as more socialist, since state ownership and control of industries is the policy.
Serendipper wrote:You have much faith in people lol. The most sensible and factually accurate posts on zerohedge almost always have the most downvotes and consequently I've arranged for the comments to be displayed starting with the most downvoted.
I agree that it depends on the quality and views of the community, which is a large assumption and one that needs to be revisited regularly. When I dream of unrealizable karma systems, they are weighted so that highly-ranked users have a larger say than lower-ranked users, and staff would lightly manipulate the rankings to guide the outcome (e.g. by boosting quality users' ranks and demoting shite users' ranks).
Serendipper wrote:Stackexchange is essentially like that. You sign up and receive 10 points. After 100 points you get powers to edit questions, improve grammar, etc. After 1000 points you get moderation powers (question deletion, locking, etc).
The model works very well for the original purpose of StackOverflow, i.e. specific technical questions with more or less objective answers. Jeff Atwood is pretty honest about the goal being to gamify the creation of a wiki, and from my experience (coding, troubleshooting ILP's server, using a linux desktop), the result is an invaluable set of answered questions for commonly encountered problems. For a more open ended discussion, and for topics like philosophy where there isn't always a clear right answer, it's a very bad system.
But it is worth noting that, if I do ever get around to bringing ILP up-to-date, it will be by moving to another Atwood project, Discourse. It does have some of StackExchange's karma-based permissions system, which we probably wouldn't use, but the other features are solid, and the theory behind the choices is very much what I'd want to see in a replacement for phpBB.
Serendipper wrote:One upgrade I could definitely get behind is to make the site more picture and video friendly. It would be nice to drag n drop and have videos cued instead of asking people to forward to a specific time.
I disagree. Nothing against anyone else's preferences or mode of expression, but I sometimes regret adding the [youtube] tags. I don't want ILP to be an independent venue for Youtube comments, and I find that when I post something and someone responds with a video, I lose all interest. And don't get me started on picture heavy threads, I think I can count the number of times a picture has added anything to a conversation here on one hand.
I recognize that both of these opinions are obnoxiously biased; I know that I've linked to videos and embedded pictures, and it always feels justified when I do it. But truth be told, I prefer to write in a terminal window, my aesthetic is minimalist, and I would rather no videos or pictures than more. I know I'll have to cave on that, but I will never stop complaining about it.
(and, having said the foregoing: you can point to a specific place in a video by including &t=### on the end of the URL, where ### is the position in seconds in the video. If you click 'Share' below a video, there's a box at the bottom of the pane that says "Start at ___" that will autopopulate with the timestamp you're at when you click it, but you can change it to any other time stamp and it will autogenerate the URL with the right &t= value.)
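For example (VIDEO_ID here is just a placeholder, not a real video): the 19:49 mark mentioned earlier is 19×60+49 = 1189 seconds, so the cued link would look like https://www.youtube.com/watch?v=VIDEO_ID&t=1189.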
Silhouette wrote:A fascinating thought experiment for sure, and I wonder what it says about Capitalism, if anything, if it's doubtful whether its modelling would produce the best philosophy upon its application here?
Not very much, I think. It's capitalism-within-capitalism, in that there's this competition for attention and production across all websites, and it isn't clear that within that larger game, a website that has a lower-level set of competitive games is going to be the most appealing; a rational actor should choose the site that gives the most for the least work and then leave when the resources are exhausted.
Silhouette wrote:Would you have acted differently now you have the benefit of hindsight?
Definitely, but I probably just would have screwed it up differently. I think Serendipper has a point about mobile, our mobile interface is terrible, and even that was a late addition.
But I also think the internet landscape has changed, people use fewer sites than they used to, they read and write shorter-form contributions and less linearly, they polarize more towards their tribes and avoid people they disagree with. In ILP's heyday, there were a lot of people desperately looking for political and religious and philosophical disagreement in the form of conversations. Now, sites are primarily trying to shut that disagreement down at the behest of users. That's a real cultural change, and a site like ILP doesn't cater to it as well as it did to the longer-form, pro-disagreement culture of the earlier internet. We're a throwback, and that would still be true if we'd done anything short of changing what we're fundamentally about.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
Posts: 5724
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA
### Re: Carleas - Is ILP capitalist-philanthropic or socialist?
Carleas wrote:
Serendipper wrote:But fees are more like rent, right?
If ILP is a state, then fees are more like a flat per capita tax. I'd say that gives a sort of 'ownership', even if under the law there is none. For ILP-as-service, fees are like rent.
But the fee itself doesn't confer ownership. The amount of influence I have in government doesn't change in proportion to the amount of tax I pay. So ILP-as-a-state, I have no control and that would not change by virtue of a tax.
If ILP is a service charging a fee, then I'm purchasing something: citizenship. As a State, I'm a citizen regardless of whether I pay the tax, but as a Service, I'm a citizen only if I pay the tax.
Serendipper wrote:But don't I have that power now? You said speech maximization is your goal, so I could still offer my content-creation as a bargaining chip.
Yes, though I think it's less than it would be if I were profit-maximizing through ads. For one thing, if we were ad funded, lurkers would be just as good as participants in terms of revenue. If you made posts that generated a lot of page views without generating a lot of discussion, that would still translate to increased ad revenue. So I'd not only want you participating, I'd want you tapping your social network to drive traffic this way. Again, that isn't real ownership, but it's strong effective ownership; I'd have a stake in people feeling like they own the site in the same way that people feel like they own their Blogger blogs.
(This is an idiosyncratic meaning of 'ownership' that only applies in the ILP-as-state metaphor, and in the way that citizens own the state. In a literal sense, under US law (and most other countries' law as I understand it), users own their posts and ILP has a license to display them. Disclaimer: IANYL.)
Interesting. So, what exactly is the "capital" to you? I have some say over the means of production by virtue of the content that I'm producing, but I'm not clear on what the gain is. It's not money, since there are no ads, so what is it? And are you exploiting me to accumulate it?
ILP could only be considered capitalistic if the users are being exploited for some type of "profit" that isn't agreed to by the users. And the leverage used to exploit would be the competition narrative that there aren't better places to go.
Carleas wrote:the outcomes [for oil-rich states] are generally pretty shitty
Serendipper wrote:Brunei...Norway...
I was referring to the resource curse, although I thought that was more widely accepted than it appears to be.
I am surprised to see Venezuela classified as not-socialist. I don't think the definition is necessarily unreasonable, though I would object that our political systems should be defined based on the policies they employ rather than on the outcomes. A laissez-faire economy that results in an equitable distribution doesn't become socialist.
I submit that a laissez-faire economy cannot possibly result in an equitable distribution as there is no mechanism for it.
I would define Venezuela as more socialist, since state ownership and control of industries is the policy.
I'm interested in the evidence for that.
No; Venezuela is not a socialist state in the sense of having its government officially and constitutionally bound to socialist construction (this is what a "socialist state" means in the Marxist-Leninist / Communist sense). At most, a socialist party, the United Socialist Party of Venezuela, held a majority in the National Assembly from 2000 to 2015 and two of the country's presidents have belonged to this party.
Now let’s turn to the question you probably intended to ask: does Venezuela have a socialist economy?
The answer to this would unequivocally be no. The dynamic of capital accumulation still drives economic activity, most enterprises are privately-owned and profit seeking, the wage-labor relationship is still in place - and even more fundamentally - Venezuela operates in a global capitalist market system.
The government does intervene with the process of capital accumulation and with market processes and does create a negative and uncertain atmosphere for business in the name of fighting corruption and serving the needs of “the people”. But it hasn’t erected a new system to replace capitalism - nor could it accomplish such a monumental task on its own. At most Venezuela is a mixed economy with anti-business government policies that distort markets and retard growth.
But even if it were true that the government owns the means of production, I would simply ask who owns the government. If the people do not own the government that owns the means of production, then it's not socialism, but exploitation, i.e. capitalism.
I still maintain that Venezuela is THE most capitalistic place on the planet; the people have no control over anything and are exploited for profit more than anywhere, as evidenced by all the turmoil in spite of having the world's largest oil reserve.
Chomsky said we impose capitalism on 3rd world countries to destroy them.
Serendipper wrote:You have much faith in people lol. The most sensible and factually accurate posts on zerohedge almost always have the most downvotes and consequently I've arranged for the comments to be displayed starting with the most downvoted.
I agree that it depends on the quality and views of the community, which is a large assumption and one that needs to be revisited regularly. When I dream of unrealizable karma systems, they are weighted so that highly-ranked users have a larger say than lower-ranked users, and staff would lightly manipulate the rankings to guide the outcome (e.g. by boosting quality users' ranks and demoting shite users' ranks).
Oh lordy... the tyranny of the minority lol. A physics guy with 60,000 rep on stack exchange deleted my question about whether a motorcycle sprocket-set of 11/33 or 13/39 would put more power to the ground because he couldn't see past the fact that they're both 1:3 ratios in order to take into account friction and chain weight. I was stunned that he could have accumulated that much reputation while being completely void of open-mindedness, especially to the fact that he might not know everything quite yet. Narcissism is an attribute of the ignorant, and where karma accumulation exists, there will be egotists at the top of the pile dispensing arrogance. I believe intelligence is a function of humility and I don't see how karma points select for that trait.
Serendipper wrote:Stackexchange is essentially like that. You sign up and receive 10 points. After 100 points you get powers to edit questions, improve grammar, etc. After 1000 points you get moderation powers (question deletion, locking, etc).
The model works very well for the original purpose of StackOverflow, i.e. specific technical questions with more or less objective answers. Jeff Atwood is pretty honest about the goal being to gamify the creation of a wiki, and from my experience (coding, troubleshooting ILP's server, using a linux desktop), the result is an invaluable set of answered questions for commonly encountered problems. For a more open ended discussion, and for topics like philosophy where there isn't always a clear right answer, it's a very bad system.
Yes, I think you nailed it there.
But it is worth noting that, if I do ever get around to bringing ILP up-to-date, it will be by moving to another Atwood project, Discourse. It does have some of StackExchange's karma-based permissions system, which we probably wouldn't use, but the other features are solid, and the theory behind the choices is very much what I'd want to see in a replacement for phpBB.
I don't know what that means.
Serendipper wrote:One upgrade I could definitely get behind is to make the site more picture and video friendly. It would be nice to drag n drop and have videos cued instead of asking people to forward to a specific time.
I disagree. Nothing against anyone else's preferences or mode of expression, but I sometimes regret adding the [youtube] tags. I don't want ILP to be an independent venue for Youtube comments, and I find that when I post something and someone responds with a video, I lose all interest. And don't get me started on picture heavy threads, I think I can count the number of times a picture has added anything to a conversation here on one hand.
Yes but anyone can post nonsense whether it be in picture form or text. If you peruse my threads I think you'll agree that pictures and video are essential to my conveyance strategy. And I don't just post a video, but give FWD instructions to specific times and also usually include a transcript. I do everything in my power to get what's in my head into your head, and hindering modes of expression is hindering that conveyance, all for the sake of a few hoodlums that annoy you. It's not worth it.
(and, having said the foregoing: you can point to a specific place in a video by including &t=### on the end of the URL, where ### is the position in seconds in the video. If you click 'Share' below a video, there's a box at the bottom of the pane that says "Start at ___" that will autopopulate with the timestamp you're at when you click it, but you can change it to any other time stamp and it will autogenerate the URL with the right &t= value.)
I tried that in the past and it didn't work. I click share, click the "start at" box, copy the link, post it here and it still displays the video from the beginning, ignoring the starting time. At least, it did when I tried it. Perhaps I'll try again. I'd have much less to complain about if I could post a cued video.
Serendipper
Philosopher
Posts: 2064
Joined: Sun Aug 13, 2017 7:30 pm
https://cadabra.science/qa/105/cadabra-2-suppressing-output-font-size?show=138
# Cadabra 2: suppressing output, font size
Two basic questions regarding Cadabra 2:
1. In Cadabra 1, if the command is ended with :, then the corresponding output is suppressed. How to do the same in Cadabra 2?
2. Is it possible to change font size at this stage? I'm using Ubuntu.
Do not end it with anything. So
ex:= A+B:
to enter expressions, and then
substitute(ex, $A = C$)
to act with an algorithm without showing output. The logic being that the latter is simply a Python statement, and statements by default do not show output.
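For instance, a minimal cell combining both behaviours might look like the following (a sketch only, assuming the usual Cadabra 2 notebook convention that a trailing ';' displays an expression, while ':' or a bare statement stays silent):
ex := A + B:                 # the ':' keeps the definition from being displayed
substitute(ex, $A = C$)      # bare Python statement, so no output is shown
ex;                          # a trailing ';' displays the substituted expression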
by (65.1k points)
Font size can now be changed using the 'Font size' entry in the 'View' menu.
by (65.1k points)
Thanks a lot! Just a small problem - the file /usr/share/applications/Cadabra2.desktop is not getting the icon properly...
That file should have been removed by 'sudo make install', and replaced with cadabra2-gtk.desktop. It didn't?
cadabra2-gtk.desktop does appear in this location: /usr/local/share/applications. But somehow it's not taking the png file and the icon is not appearing. I reinstalled several times.
There is something fundamentally broken in the way freedesktop.org's rules work for icons. Sigh... Have you tried logging out and logging back in?
Yeah, I did it many times - no change. BTW, in my system, all the desktop files are in /usr/share/applications (instead of /usr/local...). Cadabra 1 is also there. I didn't notice where it was for Cadabra 2 before this update, but the icon was working then...
http://math.stackexchange.com/questions/377121/implicit-differentiation-if-sin-y-2-sin-x-show-fracdydx2-1-3-sec
# Implicit differentiation. If $\sin y=2\sin x$, show $(\frac{dy}{dx})^2=1 + 3\sec^2y$
I'm self teaching and stuck on the last question of the exercises on implicit differentiation. It says given that $\sin y=2\sin x$, show $(\frac{dy}{dx})^2=1 + 3\sec^2y$
My workings follow. I differentiate both sides w.r.t $x$, square and rearrange:
$$\cos y \frac{dy}{dx} = 2\cos x \Rightarrow \cos^2y(\frac{dy}{dx})^2 = 4\cos^2x$$
$$\Rightarrow (\frac{dy}{dx})^2 = \frac{4\cos^2x}{\cos^2y}$$
I'm now trying to rearrange the RHS to look like $1 + 3\sec^2y$ but failing. By employing the identity $\cos^2x + \sin^2x = 1$ and looking at the original equation, I can get to $\cos^2x = 1 - \left(\dfrac{\sin y}{2}\right)^2$ and end up with
$$(\frac{dy}{dx})^2 = \frac{4(1 - \frac{1}{4}\sin^2y)}{\cos^2y} = \frac{4 - \sin^2y}{\cos^2y}$$
Can someone please put me on the right path? I wonder if I should differentiate both sides of $\cos y \frac{dy}{dx} = 2\cos x$ as that also leads to an equation involving $(\frac{dy}{dx})^2$.
You need to put your final denominator in terms of $\cos y$ - then see how it looks. – Mark Bennet Apr 30 '13 at 11:03
Hint: use $4-\sin^2 y = 3+\cos^2 y$.
Thanks to your hint, I get the right answer now. $\dfrac{3 + \cos^2y}{\cos^2y} = 1 + 3\sec^2y$ – PeteUK Apr 30 '13 at 11:09
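Putting the hint together with the working above (and using $\sin^2y = 4\sin^2x$, which follows from the original equation), the whole argument condenses to one chain of equalities - a restatement of the steps already shown:
$$\left(\frac{dy}{dx}\right)^2 = \frac{4\cos^2x}{\cos^2y} = \frac{4 - \sin^2y}{\cos^2y} = \frac{3 + \cos^2y}{\cos^2y} = 1 + 3\sec^2y$$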
http://mathhelpforum.com/statistics/26957-probability.html
Math Help - probability
1. probability
There are 5 different colors of balls: white, black, blue, red, green. We randomly pick 6 balls. Each ball has a probability of 0.2 of getting each of the 5 colors. What is the probability that, from the 6 balls picked, there are white balls and black balls?
2. Originally Posted by allrighty
There are 5 different colors of balls: white, black, blue, red, green. We randomly pick 6 balls. Each ball has a probability of 0.2 of getting each of the 5 colors. What is the probability that, from the 6 balls picked, there are white balls and black balls?
find the probabilities for each case and add them up
by (all possible) cases, i mean:
probability of choosing 1 white, 0 black, 1 blue, 1 red, 3 green
probability of choosing 1 white, 0 black, 0 blue, 2 red, 3 green
probability of choosing 1 white, 0 black, 2 blue, 0 red, 3 green
.
.
.
.
or we could do:
1 - probability of choosing no white and no black ball
so all you have to worry about are the number of ways you can choose 6 balls among the three remaining colors. there is a formula for that sort of thing. can't remember right now, i'll have to look it up. but when you do get it, just multiply the answer by 0.2 and that will give you the probability of choosing no white and no black ball
3. Originally Posted by Jhevon
so all you have to worry about are the number of ways you can choose 6 balls among the three remaining colors
Is that right? A white AND a black are required.
I have another method but it's messy.
You could have, for example, 2W, 1B and 3 others: 6!/(3!2!) ways, each with probability 0.2^3 * 0.6^3,
or 5W, 1B: 6!/5! ways, each with probability 0.2^6.
Having said all that, I'm a bit rubbish at probability so I'll wait and see what people say.
4. Originally Posted by a tutor
Is that right? A white AND a black are required.
yes, i believe so. what i said was i want to find the probability of having 0 white AND 0 black. and then take 1 minus that probability to find the probability of at least 1 white or 1 black. i think what i described does the trick... but then again, i'm a noob when it comes to probability as well
5. Original question said..
Originally Posted by allrighty
What is the probability that, from the 6 balls picked, there are white balls and black balls?
and you said..
Originally Posted by Jhevon
to find the probability of at least 1 white or 1 black.
6. Originally Posted by a tutor
Original question said..
and you said..
ah yes. my bad. i didn't see the "and." i was under the impression if we have either or we were good...
7. Hello, allrighty!
I think I've solved it . . .
There are 5 different colors of balls: white, black, blue, red, green.
We randomly pick 6 balls.
Each ball has a probability of 0.2 of getting each of the 5 colors.
What is the probability that, from the 6 balls picked, there are white balls and black balls?
The opposite of "some White and some Black" is "no White or no Black".
To get no White, we must pick six balls from the other four colors.
Then: $P(\text{0 White}) = (0.8)^6$
To get no Black, we must pick six balls from the other four colors.
Then: $P(\text{0 Black}) = (0.8)^6$
To get no White and no Black, we pick six balls from the other 3 colors.
Then: $P(\text{0 White} \wedge \text{0 Black}) = (0.6)^6$
Hence: $P(\text{0 White} \vee \text{0 Black}) = P(\text{0 White}) + P(\text{0 Black}) - P(\text{0 White} \wedge \text{0 Black}) = (0.8)^6 + (0.8)^6 - (0.6)^6 = 0.477632$
Therefore: $P(\text{some White and some Black}) = 1 - 0.477632 = \boxed{\:0.522368\:}$
8. A neat solution Soroban.
I got the same answer rather more clumsily.
I wrote a quick easy program to do it the way I mentioned above.
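For anyone who wants to reproduce that check, here is a minimal Python sketch of both approaches - Soroban's inclusion-exclusion and the brute-force sum over counts described earlier in the thread (the variable names are illustrative):

```python
from math import factorial

p = 0.2   # probability of each colour for a single ball
n = 6     # number of balls picked

# Inclusion-exclusion: 1 - P(no white or no black)
incl_excl = 1 - (2 * (4 * p) ** n - (3 * p) ** n)

# Brute force: sum multinomial terms with at least one white and at least
# one black, lumping the remaining three colours together (probability 0.6).
brute = 0.0
for w in range(1, n + 1):
    for b in range(1, n - w + 1):
        rest = n - w - b
        ways = factorial(n) // (factorial(w) * factorial(b) * factorial(rest))
        brute += ways * p ** w * p ** b * (3 * p) ** rest

print(incl_excl, brute)  # both give 0.522368 (up to floating-point rounding)
```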
https://solvedlib.com/n/8-points-set-up-an-integral-in-cylindrical-coordinates-for,8727620
# (8 points) Set up an integral in cylindrical coordinates for the volume of the region that is the right circular cylinder
###### Question:
(8 points) Set up an integral in cylindrical coordinates for the volume of the region that is the right circular cylinder whose base is the circle $r = 2\sin\theta$ in the $xy$-plane and whose top lies in the plane …
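The plane bounding the top is cut off in the source, so call the unspecified top surface $z = f(r,\theta)$; under that assumption, a minimal cylindrical-coordinate setup over the base circle $r = 2\sin\theta$ (traced for $0 \le \theta \le \pi$) is
$$V = \int_{0}^{\pi}\int_{0}^{2\sin\theta}\int_{0}^{f(r,\theta)} r \, dz \, dr \, d\theta$$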
http://www.science.gov/topicpages/c/carlo+photon+transport.html
#### Sample records for carlo photon transport
1. The 3-D Monte Carlo neutron and photon transport code MCMG and its algorithms
SciTech Connect
Deng, L.; Hu, Z.; Li, G.; Li, S.; Liu, Z.
2012-07-01
The 3-D Monte Carlo neutron and photon transport parallel code MCMG has been developed. A new collision mechanism based on material rather than nuclide is added to the code. Geometry cells and surfaces can be dynamically extended. A combination of multigroup and continuous cross-section transport is implemented. The multigroup scattering is expandable up to P5, and upscattering is considered. Various multigroup libraries can easily be plugged into the code. Results agreeing with experiments and with the MCNP code are obtained for a series of models. MCMG is a factor of 2-4 faster than the MCNP code. (authors)
2. Macro-step Monte Carlo Methods and their Applications in Proton Radiotherapy and Optical Photon Transport
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990's for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger steps-sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step-sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and proved to be just as accurate and more efficient. This work has the potential to accelerate light modeling for both photodynamic therapy and near-infrared spectroscopic imaging.
3. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up the computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport while keeping high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for coupled electron-photon transport is presented, with a focus on two aspects: first, the physical model of electron-photon transport is simplified and optimized, increasing the calculation speed with only a slight reduction in accuracy; second, a variety of MC acceleration methods are used, for example reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the convergence rate. The fast MC method was tested on many simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
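The abstract mentions variance reduction only in passing; as a generic illustration of one such technique (implicit capture with Russian roulette, a textbook method, not necessarily what SuperMC uses), the following Python sketch shows how analog absorption is replaced by weight reduction:

```python
import random

def implicit_capture_step(weight, sigma_a, sigma_t, w_min=0.01, w_survive=0.1):
    """One collision's worth of implicit capture: instead of killing the
    particle with probability sigma_a/sigma_t, fold that survival chance
    into its statistical weight, then play Russian roulette if the weight
    gets too small. Generic textbook illustration, not SuperMC code."""
    weight *= 1.0 - sigma_a / sigma_t          # absorb "a little" every collision
    if weight < w_min:                         # weight too low to be worth tracking
        if random.random() < weight / w_survive:
            weight = w_survive                 # survivor keeps the expected weight
        else:
            weight = 0.0                       # particle terminated
    return weight

print(implicit_capture_step(1.0, sigma_a=0.3, sigma_t=1.0))  # -> 0.7
```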
4. Monte Carlo photon transport on vector and parallel superconductors: Final report
SciTech Connect
Martin, W.R.; Nowak, P.F.
1987-09-30
The vectorized Monte Carlo photon transport code VPHOT has been developed for the Cray-1, Cray-XMP, and Cray-2 computers. The effort in the current project was devoted to multitasking the VPHOT code and implement it on the Cray X-MP and Cray-2 parallel-vector supercomputers, examining the robustness of the vectorized algorithm for changes in the physics of the test problems, and evaluating the efficiency of alternative algorithms such as the ''stack-driven'' algorithm of Bobrowicz for possible incorporation into VPHOT. These tasks are discussed in this paper. 4 refs.
5. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
SciTech Connect
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
6. COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport
2014-06-01
Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.
7. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
SciTech Connect
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of lineartime-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
8. penORNL: a parallel monte carlo photon and electron transport package using PENELOPE
SciTech Connect
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
9. Space applications of the MITS electron-photon Monte Carlo transport code system
SciTech Connect
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
10. Multiple processor version of a Monte Carlo code for photon transport in turbid media
Colasanti, Alberto; Guida, Giovanni; Kisslinger, Annamaria; Liuzzi, Raffaele; Quarto, Maria; Riccio, Patrizia; Roberti, Giuseppe; Villani, Fulvia
2000-10-01
Although Monte Carlo (MC) simulations represent an accurate and flexible tool to study photon transport in strongly scattering media with complex geometrical topologies, they are very often infeasible because of their very high computation times. Parallel computing, in principle very suitable for the MC approach because it consists of the repeated application of the same calculations to unrelated and superposing events, offers a possible way to overcome this problem. An MC multiple-processor code for optical and IR photon transport was developed and run on the parallel processor computer CRAY-T3E (128 DEC Alpha EV5 nodes, 600 Mflops) at CINECA (Bologna, Italy). The comparison between single-processor and multiple-processor runs for the same tissue models shows that the parallelization reduces the computation time by a factor of about N, where N is the number of processors used. This means a computation time reduction by a factor ranging from about 10^2 (as in our case, where 128 processors are available) up to about 10^3 (with the most powerful parallel computers with 1024 processors). This reduction could make feasible MC simulations that have been impracticable until now. The scaling of the execution time of the parallel code, as a function of the values of the main input parameters, is also evaluated.
11. Application of parallel computing to a Monte Carlo code for photon transport in turbid media
Colasanti, Alberto; Guida, Giovanni; Kisslinger, Annamaria; Liuzzi, Raffaele; Quarto, Maria; Riccio, Patrizia; Roberti, Giuseppe; Villani, Fulvia
1998-12-01
Monte Carlo (MC) simulations of photon transport in turbid media suffer a severe limitation: very high execution times in all practical cases. This problem can be approached with parallel computing, which, in principle, is very suitable for MC simulations because they consist of the repeated application of the same calculations to unrelated and superposing events. For the first time in the field of optical and IR photon transport, we developed an MC parallel code running on the parallel processor computer CRAY-T3E (128 DEC Alpha EV5 nodes, 600 Mflops) at CINECA (Bologna, Italy). The comparison of several single-processor runs (on Alpha AXP DEC 2100) and N-processor runs (on Cray T3E) for the same tissue models shows that the computation time is reduced by a factor of about 5N, where N is the number of processors used. This means a computation time reduction by a factor ranging from about 10^2 (as in our case) up to about 5x10^3 (with the most powerful parallel computers), which could make feasible MC simulations that have been impracticable until now.
12. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
Energy Science and Technology Software Center (ESTSC)
1998-06-12
TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
13. A method for photon beam Monte Carlo multileaf collimator particle transport
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
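The attenuation bookkeeping described in record 13 - summing the material path length a photon accumulates across the simplified regions of the MLC - can be illustrated with a short generic sketch (the region thicknesses and attenuation coefficient below are made up; this is not the authors' code):

```python
import math
import random

def photon_transmits(region_paths_cm, mu_per_cm):
    """Decide whether a photon crosses the MLC without interacting.
    region_paths_cm: material path length contributed by each simplified
    geometric region along the photon's ray; mu_per_cm: linear attenuation
    coefficient of the leaf material."""
    total_path = sum(region_paths_cm)                # add region contributions
    p_uncollided = math.exp(-mu_per_cm * total_path)
    return random.random() < p_uncollided

# Example: three regions contribute 0.2, 0.5 and 0.1 cm of leaf material
print(photon_transmits([0.2, 0.5, 0.1], mu_per_cm=3.0))
```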
14. Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)
Kum, Oyeon
2004-11-01
Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. Such a system, a so-called "Doctor by Information Technology", would be useful for providing high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose distribution calculations in customized radiation treatment planning. The optimal solution for providing an "accurate" dose distribution within an "acceptable" time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because it is the most accurate, efficient, and economical. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message passing interface. This algorithm solves the main difficulty of parallel MC simulation (overlapping random number series in the different processors) by using multiple random number seeds. The parallel results agreed well with the serial ones. The parallel efficiency approached 100%, as expected.
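Record 14's fix for overlapping random-number streams is to give each processor its own seed; a minimal modern sketch of the same idea using NumPy's SeedSequence (an illustration only - PMCEPT itself is MPI-based):

```python
import numpy as np

n_workers = 8  # stand-in for the number of MPI ranks / processors

# Spawn statistically independent child seeds from a single parent seed so
# that no two workers reuse overlapping portions of a random-number stream.
child_seeds = np.random.SeedSequence(20040101).spawn(n_workers)
rngs = [np.random.default_rng(seed) for seed in child_seeds]

# Each worker would transport its own batch of particle histories with its
# own generator; here we only show that the streams differ.
for rank, rng in enumerate(rngs):
    print(rank, rng.random(3))
```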
15. ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes
SciTech Connect
Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A.; Seltzer, S.M.; Berger, M.J.
1993-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.
16. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit
SciTech Connect
2009-11-15
Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
17. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
SciTech Connect
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state of the art Monte Carlo solution of linear time-independent couple electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2)multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
18. Development of parallel monte carlo electron and photon transport (PMCEPT) code III: Applications to medical radiation physics
Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun
2012-05-01
Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be done only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. A thorough validation includes both small fields, closely related to intensity modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed well with the measured data within a maximum error of about 2%. However, considering the experimental errors, the PMCEPT results can provide the gold standard of dose distributions for radiotherapy. The computation was also much faster than the corresponding measurements, although it is still a bottleneck for direct application to the daily routine treatment planning procedure.
19. Monte Carlo electron-photon transport using GPUs as an accelerator: Results for a water-aluminum-water phantom
SciTech Connect
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6x10^6 histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
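The delta-tracking (Woodcock) method named in record 19 can be written down in a few lines; the following is a generic one-dimensional sketch with made-up cross-sections (boundaries and escape are ignored), not the ARCHER implementation:

```python
import math
import random

def next_collision_site(x, direction, sigma_t, sigma_major):
    """Delta tracking: fly with the majorant cross-section, then accept the
    tentative site as a real collision with probability sigma_t(x)/sigma_major;
    otherwise the collision is 'virtual' and the flight continues."""
    while True:
        x += direction * (-math.log(random.random()) / sigma_major)
        if random.random() < sigma_t(x) / sigma_major:
            return x  # real collision
        # virtual collision: keep going without changing direction

# Example: the cross-section doubles beyond x = 1 cm; the majorant is the larger value
sigma = lambda x: 0.2 if x < 1.0 else 0.4
print(next_collision_site(0.0, +1.0, sigma, sigma_major=0.4))
```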
20. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code
SciTech Connect
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to calculate radiation dose due to the neutron environment around a MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.
1. A Monte Carlo study of high-energy photon transport in matter: application for multiple scattering investigation in Compton spectroscopy
PubMed Central
Brancewicz, Marek; Itou, Masayoshi; Sakurai, Yoshiharu
2016-01-01
The first results of multiple scattering simulations of polarized high-energy X-rays for Compton experiments using a new Monte Carlo program, MUSCAT, are presented. The program is developed to follow the restrictions of real experimental geometries. The new simulation algorithm uses not only well known photon splitting and interaction forcing methods but it is also upgraded with the new propagation separation method and highly vectorized. In this paper, a detailed description of the new simulation algorithm is given. The code is verified by comparison with the previous experimental and simulation results by the ESRF group and new restricted geometry experiments carried out at SPring-8. PMID:26698070
2. A Monte Carlo study of high-energy photon transport in matter: application for multiple scattering investigation in Compton spectroscopy.
PubMed
Brancewicz, Marek; Itou, Masayoshi; Sakurai, Yoshiharu
2016-01-01
The first results of multiple scattering simulations of polarized high-energy X-rays for Compton experiments using a new Monte Carlo program, MUSCAT, are presented. The program is developed to follow the restrictions of real experimental geometries. The new simulation algorithm uses not only well known photon splitting and interaction forcing methods but it is also upgraded with the new propagation separation method and highly vectorized. In this paper, a detailed description of the new simulation algorithm is given. The code is verified by comparison with the previous experimental and simulation results by the ESRF group and new restricted geometry experiments carried out at SPring-8. PMID:26698070
3. The MC21 Monte Carlo Transport Code
SciTech Connect
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
4. Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
SciTech Connect
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With the number of cells exceeding a million, the number of tallies exceeding a hundred million, and the number of particle histories exceeding ten billion, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the memory required by the model exceeds the limit of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully performed the simulation of the full-core pin-by-pin model using domain decomposition and nested parallel computation. The k_eff and the flux of each cell are obtained. (authors)
5. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.
PubMed
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc., The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
6. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors
PubMed Central
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc., The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
7. Monte Carlo simulations incorporating Mie calculations of light transport in tissue phantoms: Examination of photon sampling volumes for endoscopically compatible fiber optic probes
SciTech Connect
Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.
1996-04-01
Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two, nearly-adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
8. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.
SciTech Connect
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
9. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes
SciTech Connect
Kirk, B.L.
1985-12-01
The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.
10. RCP01 - A Monte Carlo program for solving neutron and photon transport problems in three dimensional geometry with detailed energy description and depletion capability
SciTech Connect
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
11. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
SciTech Connect
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
12. Monte Carlo Simulation of Transport
Kuhl, Nelson M.
1996-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. Numerical experiments validate a mathematical model of Paul R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The Monte Carlo method is used to solve a test particle drift kinetic equation. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian, as suggested by the central limit theorem. The detailed structure of the collision operator and the role of conservation of momentum are investigated. Exponential decay of expected values allows the computation of the confinement times of both ions and electrons. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. Comparison with experimental data and derivation of scaling laws are presented.
13. Monte Carlo simulation of transport
SciTech Connect
Kuhl, N.M.
1996-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. Numerical experiments validate a mathematical model of Paul R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The Monte Carlo method is used to solve a test particle drift kinetic equation. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian, as suggested by the central limit theorem. The detailed structure of the collision operator and the role of conservation of momentum are investigated. Exponential decay of expected values allows the computation of the confinement times of both ions and electrons. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. Comparison with experimental data and derivation of scaling laws are presented. 13 refs., 6 figs.
14. Photon transport in binary photonic lattices
Rodríguez-Lara, B. M.; Moya-Cessa, H.
2013-03-01
We present a review of the mathematical methods that are used to theoretically study classical propagation and quantum transport in arrays of coupled photonic waveguides. We focus on analyzing two types of binary photonic lattices: those where either self-energies or couplings alternate. For didactic reasons, we split the analysis into classical propagation and quantum transport, but all methods can be implemented, mutatis mutandis, in a given case. On the classical side, we use coupled mode theory and present an operator approach to the Floquet-Bloch theory in order to study the propagation of a classical electromagnetic field in two particular infinite binary lattices. On the quantum side, we study the transport of photons in equivalent finite and infinite binary lattices by coupled mode theory and linear algebra methods involving orthogonal polynomials. Curiously, the dynamics of finite size binary lattices can be expressed as the roots and functions of Fibonacci polynomials.
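As a small numerical companion to the coupled-mode analysis reviewed above, the sketch below propagates a single-waveguide excitation through a finite binary lattice with alternating self-energies by exponentiating the tridiagonal coupled-mode matrix. The lattice size, detuning values, and coupling constant are arbitrary illustrative choices, not parameters taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def binary_lattice_propagation(n_sites=40, detuning=(0.0, 1.0), coupling=1.0,
                               z=5.0, input_site=20):
    """Coupled-mode propagation i dE/dz = H E through a finite binary lattice
    with alternating self-energies (all parameter values are arbitrary)."""
    diag = np.array([detuning[n % 2] for n in range(n_sites)], dtype=complex)
    H = np.diag(diag) + coupling * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    E0 = np.zeros(n_sites, dtype=complex)
    E0[input_site] = 1.0                       # single-waveguide excitation
    E = expm(-1j * H * z) @ E0
    return np.abs(E) ** 2                      # intensity pattern after distance z

print(binary_lattice_propagation().round(3))
```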
15. Automated Monte Carlo biasing for photon-generated electrons near surfaces.
SciTech Connect
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
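The weight-window game referred to above is simple to state in code. The sketch below shows a generic splitting/Russian-roulette step for a particle whose statistical weight leaves the window; in the work described here the window bounds come from adjoint-flux Monte Carlo calculations, whereas in this sketch they are plain function arguments and the split multiplicity is capped arbitrarily.

```python
import random

def apply_weight_window(weight, w_low, w_high, w_survive=None):
    """Return the list of particle weights that replace one incoming particle.

    Particles above the window are split, particles below it play Russian
    roulette; w_low and w_high would come from an adjoint-based weight-window
    map in the setting of the report (here they are plain arguments)."""
    if w_survive is None:
        w_survive = 0.5 * (w_low + w_high)
    if weight > w_high:
        n = min(int(weight / w_high) + 1, 10)   # split (multiplicity capped arbitrarily)
        return [weight / n] * n
    if weight < w_low:
        if random.random() < weight / w_survive:
            return [w_survive]                  # survives roulette with boosted weight
        return []                               # killed
    return [weight]                             # inside the window: unchanged

print(apply_weight_window(2.7, 0.5, 1.0))       # e.g. [0.9, 0.9, 0.9]
print(apply_weight_window(0.05, 0.5, 1.0))      # [] or [0.75]
```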
16. Improved geometry representations for Monte Carlo radiation transport.
SciTech Connect
Martin, Matthew Ryan
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
17. Recent advances in the Mercury Monte Carlo particle transport code
SciTech Connect
Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.
2013-07-01
We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)
18. Photon dose calculation incorporating explicit electron transport.
PubMed
Yu, C X; Mackie, T R; Wong, J W
1995-07-01
Significant advances have been made in recent years to improve photon dose calculation. However, accurate prediction of dose perturbation effects near the interfaces of different media, where charged particle equilibrium is not established, remains unsolved. Furthermore, changes in atomic number, which affect the multiple Coulomb scattering of the secondary electrons, are not accounted for by current photon dose calculation algorithms. As local interface effects are mainly due to the perturbation of secondary electrons, a photon-electron cascade model is proposed which incorporates explicit electron transport in the calculation of the primary photon dose component in heterogeneous media. The primary photon beam is treated as the source of many electron pencil beams. The latter are transported using the Fermi-Eyges theory. The scattered photon dose contribution is calculated with the dose spread array [T.R. Mackie, J.W. Scrimger, and J.J. Battista, Med. Phys. 12, 188-196 (1985)] approach. Comparisons of the calculation with Monte Carlo simulation and TLD measurements show good agreement for positions near the polystyrene-aluminum interfaces. PMID:7565390
19. Implict Monte Carlo Radiation Transport Simulations of Four Test Problems
SciTech Connect
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
20. Evaluation of bremsstrahlung contribution to photon transport in coupled photon-electron problems
Fernández, Jorge E.; Scot, Viviana; Di Giulio, Eugenio; Salvat, Francesc
2015-11-01
The most accurate description of the radiation field in x-ray spectrometry requires the modeling of coupled photon-electron transport. Compton scattering and the photoelectric effect actually produce electrons as secondary particles which contribute to the photon field through conversion mechanisms like bremsstrahlung (which produces a continuous photon energy spectrum) and inner-shell impact ionization (ISII) (which gives characteristic lines). The solution of the coupled problem is time-consuming because the electrons interact continuously and, therefore, the number of electron collisions to be considered is always very high. This complex problem is frequently simplified by neglecting the contributions of the secondary electrons. Recent works (Fernández et al., 2013; Fernández et al., 2014) have shown the possibility of including a separately computed coupled photon-electron contribution like ISII in a photon calculation, improving such a crude approximation while preserving the speed of the pure photon transport model. By means of a similar approach and the Monte Carlo code PENELOPE (coupled photon-electron Monte Carlo), the bremsstrahlung contribution is characterized in this work. The angular distribution of the photons due to bremsstrahlung can be safely considered as isotropic, with the point of emission located at the same place as the photon collision. A new photon kernel describing the bremsstrahlung contribution is introduced: it can be included in photon transport codes (deterministic or Monte Carlo) with minimal effort. A data library describing the energy dependence of the bremsstrahlung emission has been generated for all elements Z=1-92 in the energy range 1-150 keV. The bremsstrahlung energy distribution for an arbitrary energy is obtained by interpolating in the database. A comparison between a PENELOPE direct simulation and the interpolated distribution using the database shows almost perfect agreement. The use of the database increases the calculation speed by several orders of magnitude.
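A sketch of how such a kernel could be sampled inside a photon transport code is given below. The data layout (a grid of electron energies with a tabulated photon-energy CDF per grid point, per element) is a hypothetical stand-in for the library described above, and the tiny example table is made up; the isotropic emission at the collision site follows the assumption stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_brems_photon(library, Z, e_electron):
    """Sample a bremsstrahlung photon energy and an isotropic direction.

    `library[Z]` is assumed to hold a grid of electron energies, a photon
    energy grid, and one tabulated CDF per electron-energy grid point; the CDF
    at e_electron is obtained by linear interpolation between the two
    bracketing grid points (a plausible reading of the interpolation step)."""
    e_grid, photon_e, cdfs = library[Z]                 # shapes (m,), (n,), (m, n)
    i = int(np.clip(np.searchsorted(e_grid, e_electron) - 1, 0, len(e_grid) - 2))
    f = (e_electron - e_grid[i]) / (e_grid[i + 1] - e_grid[i])
    cdf = (1.0 - f) * cdfs[i] + f * cdfs[i + 1]
    e_photon = float(np.interp(rng.random(), cdf, photon_e))   # inverse-CDF sampling
    mu = rng.uniform(-1.0, 1.0)                         # isotropic emission direction
    phi = rng.uniform(0.0, 2.0 * np.pi)
    return e_photon, (mu, phi)

# Tiny made-up table for one element (illustrative only, not real data)
e_grid = np.array([10.0, 50.0, 150.0])                  # electron energies, keV
photon_e = np.linspace(1.0, 150.0, 150)                 # photon energy grid, keV
cdfs = np.array([np.clip(photon_e / e, 0.0, 1.0) ** 0.5 for e in e_grid])
print(sample_brems_photon({29: (e_grid, photon_e, cdfs)}, 29, 80.0))
```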
1. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
SciTech Connect
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
2. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
SciTech Connect
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
1989-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
3. Applications of the Monte Carlo radiation transport toolkit at LLNL
Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine
1999-09-01
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
4. SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
SciTech Connect
West, J.T.; Murphy, J.
1988-01-01
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
5. Parallel and Portable Monte Carlo Particle Transport
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and alpha eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute alpha-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
6. Monte Carlo method for photon heating using temperature-dependent optical properties.
PubMed
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogenous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant property methods for the representative system of laser-heated silicon can become larger than 100K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
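A much-reduced version of that feedback loop is sketched below: a crude 1D photon-deposition pass uses whatever optical coefficients correspond to the current temperature field, and an explicit heat-conduction step then updates the field before the next pass. All material values, the step-function temperature dependence, and the source scaling are invented for illustration; the published method additionally segments each photon step across regions of differing properties and uses full 3D transport.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 1D slab discretised into bins (all values are assumptions)
NZ, DZ = 50, 0.02              # bins, bin width in cm
DT, ALPHA = 0.01, 1.4e-3       # time step in s, thermal diffusivity in cm^2/s
T = np.full(NZ, 270.0)         # initial temperature, K (e.g. a frozen sample)

def mu_a(temp):                # hypothetical temperature dependence, 1/cm
    return 0.5 if temp < 273.0 else 1.0

def mu_s(temp):
    return 120.0 if temp < 273.0 else 80.0

def deposit(n_photons=400):
    """One Monte Carlo pass: absorbed energy per bin using the current T field.
    (Analog 1D walk for brevity; the published method uses 3D transport and
    segments each photon step across regions of differing properties.)"""
    absorbed = np.zeros(NZ)
    for _ in range(n_photons):
        z, direction = 0.0, 1.0                       # launched into the slab
        while 0.0 <= z < NZ * DZ:
            mt = mu_a(T[int(z / DZ)]) + mu_s(T[int(z / DZ)])
            z += direction * (-np.log(rng.random()) / mt)
            if not (0.0 <= z < NZ * DZ):
                break                                 # escaped the slab
            j = int(z / DZ)
            if rng.random() < mu_a(T[j]) / (mu_a(T[j]) + mu_s(T[j])):
                absorbed[j] += 1.0                    # absorbed here: deposit and stop
                break
            direction = 1.0 if rng.random() < 0.5 else -1.0   # scattered
    return absorbed / n_photons

for _ in range(15):                                   # feedback loop
    q = deposit()                                     # heating with T-dependent mu_a, mu_s
    lap = np.zeros(NZ)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / DZ**2
    T = T + DT * (ALPHA * lap + 500.0 * q)            # arbitrary source scaling
print(T[:10].round(1))
```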
7. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study
PubMed Central
Zhang, Ying; Feng, Yuanming; Ming, Xin
2016-01-01
A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize the photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in the patient anatomy. Finally, 3D dose distributions in six patients with different tumor sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients with dosimetric gains. PMID:26977413
8. Monte Carlo simulation for the transport beamline
SciTech Connect
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC based 3D treatment planning in order to optimize the number of shots and the dose delivery.
9. Monte Carlo simulation for the transport beamline
Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.
2013-07-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC based 3D treatment planning in order to optimize the number of shots and the dose delivery.
10. Vertical Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
11. Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: domain decomposition of constructive solid geometry, which enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node; load balancing, which keeps the workload per processor as even as possible so the calculation runs efficiently; and global particle find, which, when particles are on the wrong processor, globally resolves their locations to the correct processor based on particle coordinates and the background domain. Further algorithms cover visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication has completed, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
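The core of a global particle find is a pure mapping from particle coordinates to the rank that owns the enclosing spatial block. The sketch below assumes a regular Cartesian block decomposition purely for illustration; the dissertation's algorithms handle general constructive solid geometry domains and the accompanying MPI exchange, both of which are omitted here.

```python
import numpy as np

def owner_rank(coords, lo, hi, blocks):
    """Map particle coordinates to the rank owning the enclosing spatial block,
    assuming a regular (nx, ny, nz) block decomposition of the box [lo, hi)."""
    coords = np.atleast_2d(coords)
    frac = (coords - lo) / (hi - lo)
    idx = np.clip((frac * blocks).astype(int), 0, np.array(blocks) - 1)
    nx, ny, _ = blocks
    return idx[:, 0] + nx * (idx[:, 1] + ny * idx[:, 2])

# Route two stray particles to their owners in a 2 x 2 x 2 decomposition
lo, hi = np.zeros(3), np.array([10.0, 10.0, 10.0])
print(owner_rank([[1.0, 9.0, 4.0], [8.0, 2.0, 7.0]], lo, hi, (2, 2, 2)))   # -> [2 5]
```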
12. Calculation of radiation therapy dose using all particle Monte Carlo transport
DOEpatents
Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
13. Calculation of radiation therapy dose using all particle Monte Carlo transport
DOEpatents
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
14. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
SciTech Connect
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
15. Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, Steven
1999-01-01
The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, and particle asymmetry parameter and single scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described with an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate for the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.
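The closed-form approximation itself is not reproduced here, but the quantity it targets is easy to estimate directly. The random-walk sketch below tallies the root-mean-square horizontal displacement of photons reflected from a homogeneous plane-parallel layer with Henyey-Greenstein scattering; the optical thickness, asymmetry parameter, and single-scattering albedo are illustrative inputs one could vary when comparing against an analytic formula.

```python
import numpy as np

rng = np.random.default_rng(3)

def hg_cosine(g):
    """Sample a scattering cosine from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return rng.uniform(-1.0, 1.0)
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def rms_horizontal_displacement(tau=8.0, g=0.85, omega0=1.0, n=5000):
    """MC estimate of the RMS horizontal displacement (in optical-depth units)
    of photons reflected from a plane-parallel layer of optical thickness tau."""
    disp2, n_ref = 0.0, 0
    for _ in range(n):
        x = np.zeros(3)
        d = np.array([0.0, 0.0, 1.0])                  # incident at the cloud top
        while True:
            x = x + d * (-np.log(rng.random()))        # free path, optical-depth units
            if x[2] < 0.0:                             # escaped back through the top
                disp2 += x[0] ** 2 + x[1] ** 2
                n_ref += 1
                break
            if x[2] > tau or rng.random() > omega0:
                break                                  # transmitted or absorbed
            mu, phi = hg_cosine(g), rng.uniform(0.0, 2.0 * np.pi)
            st = np.sqrt(max(0.0, 1.0 - mu * mu))
            # rotate: build an orthonormal frame around the current direction
            u = np.cross(d, [0.0, 0.0, 1.0]) if abs(d[2]) < 0.999 else np.array([1.0, 0.0, 0.0])
            u /= np.linalg.norm(u)
            v = np.cross(d, u)
            d = st * np.cos(phi) * u + st * np.sin(phi) * v + mu * d
    return np.sqrt(disp2 / max(n_ref, 1))

print(f"RMS horizontal displacement of reflected photons ~ {rms_horizontal_displacement():.2f}")
```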
16. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
17. Fiber transport of spatially entangled photons
Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.
2012-03-01
High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has been proven to be an excellent model system. The use of optical fiber technology for distribution of such photons has only very recently been practically demonstrated and is of fundamental and applied interest. It poses a big challenge compared to the established time and frequency domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as closely as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is still largely unexplored; therefore, we discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and consider the pros and cons of various fiber types.
18. The all particle method: Coupled neutron, photon, electron, charged particle Monte Carlo calculations
SciTech Connect
Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.
1988-06-01
At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon induced electron emission. Since this code is being designed to handle all particles, this approach is called the "All Particle Method". The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models "hard wired" into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to be used to directly control the execution of the program. In addition this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.
19. A multiple-source photon beam model and its commissioning process for VMC++ Monte Carlo code
Tillikainen, L.; Siljamäki, S.
2008-02-01
The use of Monte Carlo methods in photon beam treatment planning is becoming feasible due to advances in hardware and algorithms. However, a major challenge is the modeling of the radiation produced by individual linear accelerators. Monte Carlo simulation through the accelerator head or a parameterized source model may be used for this purpose. In this work, the latter approach was chosen due to larger flexibility and smaller amount of required information about the accelerator composition. The source model used includes sub-sources for primary photons emerging from target, extra-focal photons, and electron contamination. The free model parameters were derived by minimizing an objective function measuring deviations between pencil-beam-kernel based dose calculations and measurements. The output of the source model was then used as input for the VMC++ code, which was used to transport the particles through the accessory modules and the patient. To verify the procedure, VMC++ calculations were compared to measurements for open, wedged, and irregular MLC-shaped fields for 6MV and 15MV beams. The observed discrepancies were mostly within 2%, 2 mm. This work demonstrates that the developed procedure could, in the future, be used to commission the VMC++ algorithm for clinical use in a hospital.
20. Photon spectra calculation for an Elekta linac beam using experimental scatter measurements and Monte Carlo techniques.
PubMed
Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G
2008-01-01
The present work is centered on reconstructing, by means of a scatter analysis method, the primary beam photon spectrum of a linear accelerator. This technique is based on irradiating the isocenter of a rectangular block made of methacrylate placed at a distance of 100 cm from the surface and measuring scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams and to register the scattered radiation after interaction with the attenuator. Measured ionization values allow calculating the spectrum as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The measurements have been made in an Elekta Precise linac using a 6 MeV photon beam. Relative depth and profile dose curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%. PMID:19163410
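The unfolding step, recovering bin weights for a sum of mono-energetic components from a set of scatter readings, can be posed as a small non-negative least-squares problem. The sketch below is a generic version of that step with hypothetical array names; in the work described above, the response matrix would come from the mono-energetic MCNP5 simulations and the measurement vector from the ionization readings.

```python
import numpy as np
from scipy.optimize import nnls

def unfold_spectrum(response, measured):
    """Recover mono-energetic bin weights from scatter measurements.

    response[i, j] is the pre-simulated signal at measurement position i due to
    a unit-weight mono-energetic bin j, and measured[i] is the corresponding
    ionization reading; the spectrum is the non-negative combination of bins
    that best reproduces the readings (array names are illustrative)."""
    weights, residual = nnls(response, measured)
    return weights / max(weights.sum(), 1e-30), residual

# Hypothetical 4-position, 3-bin example
R = np.array([[1.0, 0.4, 0.1],
              [0.6, 0.8, 0.3],
              [0.3, 0.7, 0.6],
              [0.1, 0.4, 1.0]])
m = R @ np.array([0.2, 0.5, 0.3])        # synthetic "measurements"
print(unfold_spectrum(R, m))             # recovers ~[0.2, 0.5, 0.3]
```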
1. Comparison of Monte Carlo simulations of photon/electron dosimetry in microscale applications.
PubMed
Joneja, O P; Negreanu, C; Stepanek, J; Chawla, R
2003-06-01
It is important to establish reliable calculational tools to plan and analyse representative microdosimetry experiments in the context of microbeam radiation therapy development. In this paper, an attempt has been made to investigate the suitability of the MCNP4C Monte Carlo code to adequately model photon/electron transport over micron distances. The case of a single cylindrical microbeam of 25-micron diameter incident on a water phantom has been simulated in detail with both MCNP4C and the code PSI-GEANT, for different incident photon energies, to get absorbed dose distributions at various depths, with and without electron transport being considered. In addition, dose distributions calculated for a single microbeam with a photon spectrum representative of the European Synchrotron Radiation Facility (ESRF) have been compared. Finally, a large number of cylindrical microbeams (a total of 2601 beams, placed on a 200-micron square pitch, covering an area of 1 cm2) incident on a water phantom have been considered to study cumulative radial dose distributions at different depths. From these distributions, ratios of peak (within the microbeam) to valley (mid-point along the diagonal connecting two microbeams) dose values have been determined. The various comparisons with PSI-GEANT results have shown that MCNP4C, with its high flexibility in terms of its numerous source and geometry description options, variance reduction methods, detailed error analysis, statistical checks and different tally types, can be a valuable tool for the analysis of microbeam experiments. PMID:12956187
2. Discrete Diffusion Monte Carlo for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.
3. Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy
2015-09-01
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model for a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo commissioning of the linac head was divided into stages: designing the head model using BEAMnrc, characterizing this model using BEAMDP, and analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The MC calculations performed with DOSXYZnrc in a water phantom yielded percent depth doses (PDDs) and beam profiles at a depth of 10 cm, which were compared with measurements. The commissioning is considered complete if the difference between measured and calculated relative depth-dose data along the central axis and dose profiles at a depth of 10 cm is within 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements for an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations. In this commissioning process, the dose-difference criteria for PDDs and dose profiles were achieved using an incident electron energy of 6.4 MeV.
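The pass/fail criterion quoted above is easy to automate. The helper below, with hypothetical argument names, interpolates a calculated central-axis PDD onto the measured depths and reports whether the point-wise dose difference stays within a tolerance; a full commissioning workflow would normally add profile checks and gamma analysis.

```python
import numpy as np

def pdd_within_tolerance(depth_meas, pdd_meas, depth_calc, pdd_calc, tol=5.0):
    """Check the commissioning criterion: calculated central-axis PDD within
    `tol` percent of measurement at every measured depth (simple point-wise
    dose difference; depths are assumed to be in increasing order)."""
    calc_on_meas = np.interp(depth_meas, depth_calc, pdd_calc)
    diff = 100.0 * np.abs(calc_on_meas - pdd_meas) / np.maximum(pdd_meas, 1e-9)
    return bool(diff.max() <= tol), float(diff.max())

# Hypothetical example with a slightly perturbed calculated curve
depths = np.linspace(0.0, 20.0, 41)
measured = 100.0 * np.exp(-0.05 * depths)
calculated = measured * (1.0 + 0.01 * np.sin(depths))
print(pdd_within_tolerance(depths, measured, depths, calculated))
```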
4. Review of Monte Carlo modeling of light transport in tissues.
PubMed
Zhu, Caigang; Liu, Quan
2013-05-01
A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which includes the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, which includes scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, the potential directions for the future development of the MC method in tissue optics are discussed. PMID:23698318
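The 'general procedure of tracking an individual photon packet' mentioned in the review reduces, in its simplest form, to the hop-drop-spin loop sketched below. Isotropic scattering and an index-matched boundary are assumed purely to keep the example short; MCML-style codes add a phase function, Fresnel reflection and refraction, layered tissue geometry, and spatially resolved tallies.

```python
import numpy as np

rng = np.random.default_rng(4)

def diffuse_reflectance(mu_a=0.5, mu_s=10.0, n_packets=10000):
    """Hop-drop-spin tracking of photon packets in a semi-infinite medium.

    Isotropic scattering and an index-matched surface are assumed to keep the
    sketch short (mu_a, mu_s in 1/cm are illustrative); MCML-style codes add a
    phase function, Fresnel reflection and depth/radius-resolved tallies."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    refl = 0.0
    for _ in range(n_packets):
        z, mu_z, w = 0.0, 1.0, 1.0              # launched straight down, unit weight
        while True:
            z += mu_z * (-np.log(rng.random()) / mu_t)    # hop
            if z < 0.0:
                refl += w                                 # escaped through the surface
                break
            w *= albedo                                   # drop: deposit (1 - albedo) * w
            if w < 1e-3:                                  # Russian roulette
                if rng.random() < 0.1:
                    w /= 0.1
                else:
                    break
            mu_z = rng.uniform(-1.0, 1.0)                 # spin: isotropic new direction
    return refl / n_packets

print(f"diffuse reflectance ~ {diffuse_reflectance():.3f}")
```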
5. A generic algorithm for Monte Carlo simulation of proton transport
Salvat, Francesc
2013-12-01
A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
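For the elastic channel, the basic operation is drawing a scattering cosine from a numerically tabulated DCS. The sketch below implements the plain tabulated inverse-CDF version of that idea with an illustrative screened-Rutherford-like shape; the adaptive interpolation-error control described in the abstract is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def make_angle_sampler(mu_grid, dcs):
    """Build an inverse-CDF sampler for the scattering cosine from a DCS
    tabulated on a grid of cosines (plain tabulated version; the adaptive
    error-controlled algorithm of the paper is not reproduced)."""
    pdf = np.maximum(dcs, 0.0)
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(mu_grid))))
    cdf /= cdf[-1]
    def sample(n=1):
        return np.interp(rng.random(n), cdf, mu_grid)
    return sample

# Illustrative screened-Rutherford-like shape (not a real tabulated DCS)
mu_grid = np.linspace(-1.0, 1.0, 2001)
eta = 0.01                                   # screening parameter (made up)
sampler = make_angle_sampler(mu_grid, 1.0 / (1.0 + eta - mu_grid) ** 2)
print(sampler(5))
```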
6. Efficient photon treatment planning by the use of Swiss Monte Carlo Plan
Fix, M. K.; Manser, P.; Frei, D.; Volken, W.; Mini, R.; Born, E. J.
2007-06-01
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. Automation is needed for usage in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new GUI-based photon MC environment has been developed, resulting in a very flexible framework, namely the Swiss Monte Carlo Plan (SMCP). Appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: source, beam modifiers, and patient. The source part includes a phase-space source, source models, and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory, hence no files are used as an interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, three patient cases are shown, with comparisons between MC calculated dose distributions and those calculated by a pencil beam algorithm or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
7. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
SciTech Connect
Bal, Guillaume; Davis, Anthony B.; Langmore, Ian
2011-08-20
Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.
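Stripped of the transport details, the mechanism is ordinary importance sampling, with the approximate adjoint playing the role of the importance function. The toy sketch below, built around made-up functions and a 1D 'detector' region, shows only that generic reweighting step, not the paper's atmospheric implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

def importance_sampled_integral(f, importance, x_grid, n=100_000):
    """Estimate the integral of f over the range of x_grid by drawing samples
    from a density proportional to `importance` (standing in for the
    deterministic adjoint solution) and reweighting by f / pdf."""
    p = np.maximum(importance(x_grid), 1e-12)
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(x_grid))))
    cdf /= cdf[-1]
    x = np.interp(rng.random(n), cdf, x_grid)            # inverse-CDF sampling
    # Piecewise-constant density matching how x was actually sampled
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    pdf_x = (cdf[i + 1] - cdf[i]) / (x_grid[i + 1] - x_grid[i])
    return np.mean(f(x) / pdf_x)

# Toy problem: a sharply peaked contribution near a "detector" at x = 0.9
f = lambda x: np.exp(-200.0 * (x - 0.9) ** 2)
approx_adjoint = lambda x: np.exp(-150.0 * (x - 0.9) ** 2) + 0.01   # crude importance
x_grid = np.linspace(0.0, 1.0, 4001)
print(importance_sampled_integral(f, approx_adjoint, x_grid))       # roughly 0.12
```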
8. Shift: A Massively Parallel Monte Carlo Radiation Transport Package
SciTech Connect
Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P
2015-01-01
This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
9. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, which is applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
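Iterating on the scattering source is the classical workhorse of deterministic transport, and a bare-bones version is easy to show. The sketch below solves a 1D, one-group slab problem with isotropic scattering using diamond-difference discrete-ordinates sweeps and vacuum boundaries; the anisotropic P8 scattering expansion and the pulse-height response step from the abstract are deliberately left out, and all problem data are illustrative.

```python
import numpy as np

def source_iteration(nx=100, length=10.0, sigma_t=1.0, sigma_s=0.5,
                     q=1.0, n_angles=16, tol=1e-8, max_iter=10_000):
    """1D, one-group discrete-ordinates slab solver with isotropic scattering:
    diamond-difference sweeps, vacuum boundaries, iteration on the scattering
    source (all problem data here are illustrative)."""
    dx = length / nx
    mu, w = np.polynomial.legendre.leggauss(n_angles)   # quadrature on [-1, 1]
    phi = np.zeros(nx)
    for _ in range(max_iter):
        src = 0.5 * (sigma_s * phi + q)                 # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(n_angles):
            psi_in = 0.0                                # vacuum boundary condition
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            a = 2.0 * abs(mu[m]) / dx
            for i in cells:
                psi_c = (src[i] + a * psi_in) / (sigma_t + a)
                psi_in = 2.0 * psi_c - psi_in           # diamond-difference closure
                phi_new[i] += w[m] * psi_c
        if np.max(np.abs(phi_new - phi)) < tol * np.max(np.abs(phi_new)):
            return phi_new
        phi = phi_new
    return phi

print(source_iteration()[:5].round(4))
```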
10. Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating an X-ray Philips MG-450 tube associated with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm^-1.
11. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.
PubMed
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
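The split into Map tasks that generate photon histories and a Reduce task that scores absorption can be mimicked locally without Hadoop. The sketch below uses Python's multiprocessing pool as a stand-in for the MapReduce framework and a deliberately trivial 1D absorption model, so it represents only the map/reduce structure, not the MC321 physics.

```python
import numpy as np
from multiprocessing import Pool

def map_task(args):
    """Map: simulate a batch of photon histories in a trivial 1D absorber and
    return a partial depth-resolved absorption tally."""
    seed, n_photons, mu_a, nz, dz = args
    rng = np.random.default_rng(seed)
    tally = np.zeros(nz)
    depths = -np.log(rng.random(n_photons)) / mu_a        # absorption depths
    idx = np.minimum((depths / dz).astype(int), nz - 1)
    np.add.at(tally, idx, 1.0)
    return tally

def reduce_task(partials):
    """Reduce: score total photon absorption by summing the partial tallies."""
    return np.sum(partials, axis=0)

if __name__ == "__main__":
    # Eight "map" jobs as a local stand-in for the Hadoop tasks in the paper
    jobs = [(seed, 250_000, 1.0, 50, 0.1) for seed in range(8)]
    with Pool() as pool:
        absorption = reduce_task(pool.map(map_task, jobs))
    print(absorption[:5] / absorption.sum())
```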
14. Monte Carlo Assessments of Absorbed Doses to the Hands of Radiopharmaceutical Workers Due to Photon Emitters
SciTech Connect
Ilas, Dan; Eckerman, Keith F; Karagiannis, Harriet
2009-01-01
This paper describes the characterization of radiation doses to the hands of nuclear medicine technicians resulting from the handling of radiopharmaceuticals. Radiation monitoring using ring dosimeters indicates that finger dosimeters that are used to show compliance with applicable regulations may overestimate or underestimate radiation doses to the skin depending on the nature of the particular procedure and the radionuclide being handled. To better understand the parameters governing the absorbed dose distributions, a detailed model of the hands was created and used in Monte Carlo simulations of selected nuclear medicine procedures. Simulations of realistic configurations typical for workers handling radiopharmaceuticals were performed for a range of energies of the source photons. The lack of charged-particle equilibrium necessitated full photon-electron coupled transport calculations. The results show that the dose to different regions of the fingers can differ substantially from dosimeter readings when dosimeters are located at the base of the finger. We tried to identify consistent patterns that relate the actual dose to the dosimeter readings. These patterns depend on the specific work conditions and can be used to better assess the absorbed dose to different regions of the exposed skin.
15. Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female that was created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model can represent most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate the difference in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies of 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that the SAFs from the Rad-HUMAN phantom have similar trends but are larger than those from the other two models. The differences were due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001).
16. Performance analysis of the Monte Carlo code MCNP4A for photon-based radiotherapy applications
SciTech Connect
DeMarco, J.J.; Solberg, T.D.; Wallace, R.E.; Smathers, J.B.
1995-12-31
The Los Alamos code MCNP4A (Monte Carlo N-Particle version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. This study is designed to evaluate MCNP4A as the dose calculation system for photon-based radiotherapy applications. A graphical user interface, MCNPRT (MCNP Radiation Therapy), has been developed which automatically sets up the geometry and photon source requirements for three-dimensional simulations using Computed Tomography (CT) data. Preliminary results suggest the code is capable of calculating satisfactory dose distributions in a variety of simulated homogeneous and heterogeneous phantoms. The major drawback of this dosimetry system is the amount of time needed to obtain a statistically significant answer. MCNPRT allows the user to analyze the performance of MCNP4A as a function of material, geometry resolution and MCNP4A photon and electron physics parameters. A typical simulation geometry consists of a 10 MV photon point source incident on a 15 x 15 x 15 cm³ phantom composed of water voxels ranging in size from 10 x 10 x 10 mm³ to 2 x 2 x 2 mm³. As the voxel size is decreased, a larger percentage of time is spent tracking photons through the voxelized geometry as opposed to the secondary electrons. A PRPR Patch file is under development that will optimize photon transport within the simulation phantom specifically for radiotherapy applications. MCNP4A also supports parallel processing capabilities via the Parallel Virtual Machine (PVM) message passing system. A dedicated network of five SUN SPARC2 processors produced a wall-clock speedup of 4.4 based on a simulation phantom containing 5 x 5 x 5 mm³ water voxels. The code was also tested on the 80 node IBM RS/6000 cluster at the Maui High Performance Computing Center (MHPCC). A non-dedicated system of 75 processors produced a wall-clock speedup of 29 relative to one SUN SPARC2 computer.
17. Monte Carlo Simulation of Light Transport in Tissue, Beta Version
Energy Science and Technology Software Center (ESTSC)
2003-12-09
Understanding light-tissue interaction is fundamental in the field of Biomedical Optics. It has important implications for both therapeutic and diagnostic technologies. In this program, light transport in scattering tissue is modeled by absorption and scattering events as each photon travels through the tissue. The path of each photon is determined statistically by calculating probabilities of scattering and absorption. Other measured quantities are the total reflected light, total transmitted light, and total heat absorbed.
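A minimal sketch of the statistical photon walk described above, tallying the same three quantities (reflected, transmitted and absorbed light) for a homogeneous slab with Henyey-Greenstein scattering. The optical coefficients, anisotropy factor and slab thickness are illustrative assumptions, not parameters of the ESTSC program.

import math
import random

# Photon random walk through a homogeneous slab, tallying reflected, transmitted and
# absorbed fractions. mu_a, mu_s (1/cm), anisotropy g and thickness (cm) are assumed.
def run_slab(n=20000, mu_a=0.5, mu_s=20.0, g=0.9, thickness=0.2, seed=1):
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    reflected = transmitted = absorbed = 0.0
    for _ in range(n):
        z, uz, w = 0.0, 1.0, 1.0
        while True:
            z += uz * (-math.log(rng.random()) / mu_t)            # sample step length
            if z < 0.0:
                reflected += w
                break
            if z > thickness:
                transmitted += w
                break
            absorbed += w * mu_a / mu_t                           # deposit heat
            w *= mu_s / mu_t
            if w < 1e-4:
                break                                             # drop low-weight photons
            s = (1 - g * g) / (1 - g + 2 * g * rng.random())      # Henyey-Greenstein
            cos_t = (1 + g * g - s * s) / (2 * g)
            sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            uz = (uz * cos_t
                  + math.sqrt(max(0.0, 1.0 - uz * uz)) * sin_t
                  * math.cos(2 * math.pi * rng.random()))         # new polar cosine
    return reflected / n, transmitted / n, absorbed / n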
18. Monte Carlo simulation and experimental measurement of a nonspectroscopic radiation portal monitor for photon detection efficiencies of internally deposited radionuclides
Carey, Matthew Glen
Particle transport of radionuclide photons using the Monte Carlo N-Particle computer code can be used to determine a portal monitor's photon detection efficiency, in units of counts per photon, for internally deposited radionuclides. Good agreement has been found with experimental results for radionuclides that emit higher energy photons, such as Cs-137 and Co-60. Detection efficiency for radionuclides that emit lower energy photons, such as Am-241, depends strongly on the effective discriminator energy level of the portal monitor as well as any attenuating material between the source and detectors. This evaluation uses a chi-square approach to determine the best-fit discriminator level of a non-spectroscopic portal monitor when the effective discriminator level, in units of energy, is not known. Internal detection efficiencies were evaluated experimentally using an anthropomorphic phantom with NIST-traceable sources at various internal locations, and by simulation using MCNP5. The results of this research find that MCNP5 can be an effective tool for simulation of photon detection efficiencies, given a known discriminator level, for internally and externally deposited radionuclides. In addition, MCNP5 can be used for bounding personnel doses from either internally or externally deposited mixtures of radionuclides.
19. Efficient, Automated Monte Carlo Methods for Radiation Transport
PubMed Central
Kong, Rong; Ambrose, Martin; Spanier, Jerome
2012-01-01
Monte Carlo simulations provide an indispensible model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872
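The staged learning described above, in which tallies from stage k reshape the sampling of stage k+1, can be illustrated on a toy integral with generic adaptive importance sampling. This is only a sketch of the stage-to-stage adaptation idea under assumed bin counts and sample sizes, not the authors' coupled forward/adjoint algorithm.

import math
import random

# Generic staged adaptive importance sampling for a toy integral on [0, 1): tallies
# from stage k define the piecewise-constant sampling density of stage k+1.
def staged_estimate(f=lambda x: math.exp(-8 * x), n_stages=4, n_per_stage=5000,
                    n_bins=20, seed=0):
    rng = random.Random(seed)
    density = [1.0] * n_bins                            # start from a uniform guess
    estimates = []
    for _ in range(n_stages):
        norm = sum(density) / n_bins
        tallies = [0.0] * n_bins
        total = 0.0
        for _ in range(n_per_stage):
            b = rng.choices(range(n_bins), weights=density)[0]
            x = (b + rng.random()) / n_bins             # uniform within the chosen bin
            p = density[b] / norm                       # sampling pdf at x
            weight = f(x) / p                           # importance weight
            total += weight
            tallies[b] += abs(weight)
        estimates.append(total / n_per_stage)
        floor = 1e-3 * max(tallies)
        density = [max(t, floor) for t in tallies]      # stage k informs stage k+1
    return estimates                                    # exact value: (1 - e**-8) / 8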
1. Monte Carlo simulation of photon densities inside the dermis in LLLT (low level laser therapy)
Parvin, Parviz; Eftekharnoori, Somayeh; Dehghanpour, Hamid Reza
2009-09-01
In this work, the photon distribution of a He-Ne laser within dermis tissue is studied. The dermis, a highly scattering medium, was irradiated by a low power laser. The photon densities as well as the corresponding isothermal contours were obtained by two different numerical methods, i.e., Lambert-Beer and Welch. The results were subsequently compared with those of the Monte Carlo method.
2. Accurate and efficient Monte Carlo solutions to the radiative transport equation in the spatial frequency domain
PubMed Central
2012-01-01
We present an approach to solving the radiative transport equation (RTE) for layered media in the spatial frequency domain (SFD) using Monte Carlo (MC) simulations. This is done by obtaining a complex photon weight from analysis of the Fourier transform of the RTE. We also develop a modified shortcut method that enables a single MC simulation to efficiently provide RTE solutions in the SFD for any number of spatial frequencies. We provide comparisons between the modified shortcut method and conventional discrete transform methods for SFD reflectance. Further results for oblique illumination illustrate the potential diagnostic utility of the SFD phase-shifts for analysis of layered media. PMID:21685989
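The complex photon weight idea described above can be sketched as follows: each detected photon contributes w·exp(-i·2π·fx·x) at its exit position x, so one simulation yields SFD reflectance at any list of spatial frequencies. The homogeneous, planar random walk below is only a self-contained stand-in for a real RTE Monte Carlo; all optical parameters and names are assumptions.

import cmath
import math
import random

# Complex-weight scoring: one set of photon exit positions and weights gives SFD
# reflectance at every requested spatial frequency via w * exp(-i*2*pi*fx*x).
def sfd_reflectance(frequencies, n_photons=20000, mu_a=0.05, mu_s=10.0, seed=2):
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    exits = []                                          # (lateral exit position, weight)
    for _ in range(n_photons):
        x = z = ux = 0.0
        uz, w = 1.0, 1.0
        for _ in range(1000):
            step = -math.log(rng.random()) / mu_t
            x, z = x + ux * step, z + uz * step
            if z < 0.0:
                exits.append((x, w))
                break
            w *= mu_s / mu_t                            # implicit absorption
            alpha = 2 * math.pi * rng.random()          # isotropic in-plane re-direction
            ux, uz = math.sin(alpha), math.cos(alpha)
    return {fx: abs(sum(w * cmath.exp(-2j * math.pi * fx * x) for x, w in exits))
                / n_photons
            for fx in frequencies}

# usage sketch: sfd_reflectance([0.0, 0.05, 0.1])  # spatial frequencies in 1/length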
3. A three-dimensional Monte Carlo calculation of the photon initiated showers and Kiel result
NASA Technical Reports Server (NTRS)
1985-01-01
The Kiel experimental results indicate the existence of ultra high-energy gamma-rays coming from Cyg X-3. However, the result also indicates that the number of muons included in the photon initiated showers is the same as the number included in the proton initiated showers. According to our Monte Carlo calculation, the number of muons included in photon initiated showers should be less than 1/15 of that in proton initiated showers. The previous simulation was made under a one dimensional approximation; this time the result of a three dimensional calculation is reported.
4. Monte Carlo generator photon jets used for luminosity at e+e- colliders
Fedotovich, G. V.; Kuraev, E. A.; Sibidanov, A. L.
2010-06-01
A Monte Carlo Generator Photon Jets (MCGPJ) to simulate Bhabha scattering as well as production of two charged muons and two-photon events is discussed. The theoretical precision of the cross sections with radiative corrections (RC) is estimated to be smaller than 0.2%. The next-to-leading order (NLO) radiative corrections proportional to α are treated exactly, whereas all logarithmically enhanced contributions, related to photon jets emitted in the collinear region, are taken into account in the framework of the Structure Function approach. Numerous tests of the MCGPJ as well as a detailed comparison with other MC generators are presented.
5. Monte Carlo Modeling of Photon Interrogation Methods for Characterization of Special Nuclear Material
SciTech Connect
Pozzi, Sara A; Downar, Thomas J; Padovani, Enrico; Clarke, Shaun D
2006-01-01
This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission
6. Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
SciTech Connect
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm² fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
7. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
8. Neutron streaming Monte Carlo radiation transport code MORSE-CG
SciTech Connect
Halley, A.M.; Miller, W.H.
1986-11-01
Calculations have been performed using the Monte Carlo code, MORSE-CG, to determine the neutron streaming through various straight and stepped gaps between radiation shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for "pie-shaped" radiation shields with gaps between segments. It is apparent that some type of offset, or stepped gap, configuration will be necessary to reduce neutron streaming through these gaps. To evaluate this streaming problem, a MORSE-to-MORSE coupling technique was used, consisting of two separate transport calculations, which together defined the entire transport problem. The results define the effectiveness of various gap configurations to eliminate radiation streaming.
9. Comparing gold nano-particle enhanced radiotherapy with protons, megavoltage photons and kilovoltage photons: a Monte Carlo simulation
Lin, Yuting; McMahon, Stephen J.; Scarpelli, Matthew; Paganetti, Harald; Schuemann, Jan
2014-12-01
Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g. they cause a dose enhancement 20 times higher than the one caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
10. Monte Carlo radiation transport: A revolution in science
SciTech Connect
Hendricks, J.
1993-04-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.
11. SIMIND Monte Carlo simulation of a single photon emission CT
PubMed Central
Bahreyni Toossi, M. T.; Islamian, J. Pirayesh; Momennezhad, M.; Ljungberg, M.; Naseri, S. H.
2010-01-01
In this study, we simulated a Siemens E.CAM SPECT system using the SIMIND Monte Carlo program to acquire its experimental characterization in terms of energy resolution, sensitivity, spatial resolution and imaging of phantoms using 99mTc. The experimental and simulation data for SPECT imaging were acquired from a point source and a Jaszczak phantom. Verification of the simulation was done by comparing two sets of images and related data obtained from the actual and simulated systems. Image quality was assessed by comparing image contrast and resolution. Simulated and measured energy spectra (with or without a collimator) and spatial resolution from point sources in air were compared. The resulting energy spectra present similar peaks for the gamma energy of 99mTc at 140 keV. The FWHM was calculated to be 14.01 keV for the simulation and 13.80 keV for the experimental data, corresponding to energy resolutions of 10.01% and 9.86%, compared to the specified 9.9% for both systems, respectively. Sensitivities of the real and virtual gamma cameras were calculated to be 85.11 and 85.39 cps/MBq, respectively. The energy spectra of both simulated and real gamma cameras were matched. Images obtained from the Jaszczak phantom, experimentally and by simulation, showed similarity in contrast and resolution. SIMIND Monte Carlo could successfully simulate the Siemens E.CAM gamma camera. The results validate the use of the simulated system for further investigation, including modification, planning, and developing a SPECT system to improve the quality of images. PMID:20177569
12. Modeling photon transport in transabdominal fetal oximetry
Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton
2000-07-01
The possibility of optical oximetry of the blood in the fetal brain measured across the maternal abdomen just prior to birth is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diameter fetal brain located at a constant 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [W/cm² fluence rate per W incident power] collected at the 10 cm position is Tm = 2.2 × 10⁻⁶ cm⁻² if the fetal brain has the same optical properties as the mother and Tf = 1.0 × 10⁻⁶ cm⁻² for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf - Tm)/Tm is -53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.
13. Monte Carlo calculation of dose rate conversion factors for external exposure to photon emitters in soil.
PubMed
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
2000-03-01
The dose rate conversion factors DCF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h⁻¹ per Bq kg⁻¹) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, calculate accurately the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered) the DCF values calculated by the three codes are in very good agreement with each other. The comparison between these results and results deduced previously by other authors indicates good agreement (less than 15% difference) for photon energies above 1,500 keV. In contrast, the agreement is not as good (differences of 20-30%) for low energy photons. PMID:10688452
14. Neutron and photon transport in seagoing cargo containers
SciTech Connect
Pruet, J.; Descalle, M.-A.; Hall, J.; Pohl, B.; Prussin, S.G.
2005-05-01
Factors affecting sensing of small quantities of fissionable material in large seagoing cargo containers by neutron interrogation and detection of β-delayed photons are explored. The propagation of variable-energy neutrons in cargos, subsequent fission of hidden nuclear material and production of the β-delayed photons, and the propagation of these photons to an external detector are considered explicitly. Detailed results of Monte Carlo simulations of these stages in representative cargos are presented. Analytical models are developed both as a basis for a quantitative understanding of the interrogation process and as a tool to allow ready extrapolation of our results to cases not specifically considered here.
15. Monte Carlo simulation of secondary radiation exposure from high-energy photon therapy using an anthropomorphic phantom.
PubMed
Frankl, Matthias; Macián-Juan, Rafael
2016-03-01
The development of intensity-modulated radiotherapy treatments delivering large amounts of monitor units (MUs) recently raised concern about higher risks for secondary malignancies. In this study, optimised combinations of several variance reduction techniques (VRTs) have been implemented in order to achieve a high precision in Monte Carlo (MC) radiation transport simulations and the calculation of in- and out-of-field photon and neutron dose-equivalent distributions in an anthropomorphic phantom using MCNPX, v.2.7. The computer model included a Varian Clinac 2100C treatment head and a high-resolution head phantom. By means of the applied VRTs, a relative uncertainty for the photon dose-equivalent distribution of <1 % in-field and 15 % on average over the rest of the phantom could be obtained. Neutron dose equivalent, caused by photonuclear reactions in the linear accelerator components at photon energies of approximately >8 MeV, has been calculated. Relative uncertainty, calculated for each voxel, could be kept below 5 % on average over all voxels of the phantom. Thus, a very detailed neutron dose distribution could be obtained. The achieved precision now allows a far better estimation of both photon and especially neutron doses out-of-field, where neutrons can become the predominant component of secondary radiation. PMID:26311702
16. A Monte Carlo method for calculating the energy response of plastic scintillators to polarized photons below 100 keV
Mizuno, T.; Kanai, Y.; Kataoka, J.; Kiss, M.; Kurita, K.; Pearce, M.; Tajima, H.; Takahashi, H.; Tanaka, T.; Ueno, M.; Umeki, Y.; Yoshida, H.; Arimoto, M.; Axelsson, M.; Marini Bettolo, C.; Bogaert, G.; Chen, P.; Craig, W.; Fukazawa, Y.; Gunji, S.; Kamae, T.; Katsuta, J.; Kawai, N.; Kishimoto, S.; Klamra, W.; Larsson, S.; Madejski, G.; Ng, J. S. T.; Ryde, F.; Rydström, S.; Takahashi, T.; Thurston, T. S.; Varner, G.
2009-03-01
The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within 5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.
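A small sketch of the modulation factor defined above, the ratio of the amplitude to the mean of the azimuthal scattering-angle distribution, extracted here from the cos(2φ) Fourier component of a binned histogram. The input angle array and bin count are assumptions; the angles would come from measurement or a Geant4-type simulation rather than from this snippet.

import math
import numpy as np

# Modulation factor M = amplitude / mean of the azimuthal scattering-angle distribution,
# obtained from the cos(2*phi) Fourier component of a binned histogram (angles in radians).
def modulation_factor(azimuthal_angles, n_bins=36):
    counts, edges = np.histogram(azimuthal_angles, bins=n_bins, range=(0.0, 2 * math.pi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    a2 = 2.0 * np.mean(counts * np.cos(2 * centers))    # cos(2*phi) projection
    b2 = 2.0 * np.mean(counts * np.sin(2 * centers))    # sin(2*phi) projection
    return math.hypot(a2, b2) / counts.mean()

# For N(phi) = A * (1 + M * cos(2 * (phi - phi0))) this recovers M up to statistics.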
17. Low-energy photons in high-energy photon fields--Monte Carlo generated spectra and a new descriptive parameter.
PubMed
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-09-01
The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P_D(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P_D(200 keV) is bell-shaped, with a wide-ranging exponential tail of half-value 6 to 7 cm. The P_D(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P_D(200 keV) values are up to an order of magnitude higher than on the central axis for the same irradiation depth. The 2D pattern of P_D(200 keV) for a radiotherapy field visualizes the regions, e.g. at the field margin, where changes of detector responses and dose conversion factors, as well as increases of the RBE, have to be anticipated. Parameter P_D(200 keV) can also be used as guidance supporting the selection of a calibration geometry suitable for radiation dosimeters to be used in small radiation fields. PMID:21530198
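Computing the descriptive parameter P_D(200 keV) from a tallied spectrum reduces to a ratio of weighted sums, as in the hedged sketch below. The arrays energies_mev, fluence (per energy bin) and mu_en_over_rho (mass energy-absorption coefficients of water on the same grid, e.g. interpolated from NIST tables) are assumed inputs, not data from the paper.

import numpy as np

# Fraction of dose delivered by photons with E <= 200 keV, from a binned fluence spectrum.
def pd_200kev(energies_mev, fluence, mu_en_over_rho, cutoff_mev=0.2):
    contribution = fluence * energies_mev * mu_en_over_rho   # per-bin kerma contribution
    low = energies_mev <= cutoff_mev
    return contribution[low].sum() / contribution.sum()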
18. Current status of the PSG Monte Carlo neutron transport code
SciTech Connect
Leppänen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)
19. Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
SciTech Connect
Booth, T.E.
1998-06-22
It is well known that a Monte Carlo estimate can be obtained with zero-variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that is dropping exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
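The zero-variance statement above can be demonstrated on a one-line toy: when the sampling density is exactly proportional to the integrand (the "exact importance function"), every history returns the same score. The integrand below is an arbitrary stand-in, not the adjoint-constrained importance function of the paper.

import random

# With the sampling density exactly proportional to the integrand, every score equals
# the integral and the variance vanishes; analog (uniform) sampling does not.
def estimate(n=1000, use_exact_importance=True, seed=3):
    rng = random.Random(seed)
    f = lambda x: 3.0 * x * x                   # integral over [0, 1] is exactly 1.0
    scores = []
    for _ in range(n):
        if use_exact_importance:
            x = rng.random() ** (1.0 / 3.0)     # pdf p(x) = 3x^2, proportional to f
            p = 3.0 * x * x
        else:
            x, p = rng.random(), 1.0            # analog sampling
        scores.append(f(x) / p)                 # score = integrand / sampling density
    mean = sum(scores) / n
    variance = sum((s - mean) ** 2 for s in scores) / n
    return mean, variance                       # variance is 0.0 with exact importance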
20. A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation
SciTech Connect
Wu, Y.; Modest, M.F.; Haworth, D.C.
2007-05-01
A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.
1. A Monte Carlo study on neutron and electron contamination of an unflattened 18-MV photon beam.
PubMed
Mesbahi, Asghar
2009-01-01
Recent studies on flattening filter (FF) free beams have shown increased dose rate and less out-of-field dose for unflattened photon beams. On the other hand, changes in contamination electrons and neutron spectra produced through photon (E>10 MV) interactions with linac components have not been completely studied for FF free beams. The objective of this study was to investigate the effect of removing the FF on contamination electron and neutron spectra for an 18-MV photon beam using the Monte Carlo (MC) method. The 18-MV photon beam of an Elekta SL-25 linac was simulated using the MCNPX MC code. The photon, electron and neutron spectra at a distance of 100 cm from the target and on the central axis of the beam were scored for 10 x 10 and 30 x 30 cm² fields. Our results showed an increase in contamination electron fluence (normalized to photon fluence) of up to 1.6 times for the FF free beam, which causes more skin dose for patients. A neutron fluence reduction of 54% was observed for unflattened beams. Our study confirmed the previous measurement results, which showed neutron dose reduction for unflattened beams. This feature can lead to less neutron dose for patients treated with unflattened high-energy photon beams. PMID:18760613
2. Optimization of Monte Carlo transport simulations in stochastic media
SciTech Connect
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
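A minimal sketch of the random sequential addition step described above, with a coarse cell grid standing in for the fast nearest-neighbour search so that each overlap test only visits nearby spheres. Mono-sized spheres in a cubic box; the radius, box size and try limit are assumptions, and this is not the authors' optimized RSA implementation.

import math
import random

# Random sequential addition of mono-sized spheres in a cubic box; a coarse cell grid
# limits each overlap test to nearby spheres instead of a full linear search.
def rsa_pack(n_spheres, radius, box, max_tries=200000, seed=4):
    rng = random.Random(seed)
    cell = 2.0 * radius                    # overlapping centres always share/adjoin cells
    grid, centres = {}, []
    for _ in range(max_tries):
        if len(centres) == n_spheres:
            break
        c = tuple(radius + rng.random() * (box - 2 * radius) for _ in range(3))
        key = tuple(int(v // cell) for v in c)
        ok = True
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    for other in grid.get((key[0] + di, key[1] + dj, key[2] + dk), []):
                        if math.dist(c, other) < 2.0 * radius:
                            ok = False
        if ok:
            centres.append(c)
            grid.setdefault(key, []).append(c)
    return centres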
3. A Monte Carlo simulation of ion transport at finite temperatures
Ristivojevic, Zoran; Petrović, Zoran Lj
2012-06-01
We have developed a Monte Carlo simulation for ion transport in hot background gases, which is an alternative way of solving the corresponding Boltzmann equation that determines the distribution function of ions. We consider the limit of low ion densities when the distribution function of the background gas remains unchanged due to collision with ions. Special attention has been paid to properly treating the thermal motion of the host gas particles and their influence on ions, which is very important at low electric fields, when the mean ion energy is comparable to the thermal energy of the host gas. We found the conditional probability distribution of gas velocities that correspond to an ion of specific velocity which collides with a gas particle. Also, we have derived exact analytical formulae for piecewise calculation of the collision frequency integrals. We address the cases when the background gas is monocomponent and when it is a mixture of different gases. The techniques described here are required for Monte Carlo simulations of ion transport and for hybrid models of non-equilibrium plasmas. The range of energies where it is necessary to apply the technique has been defined. The results we obtained are in excellent agreement with the existing ones obtained by complementary methods. Having verified our algorithm, we were able to produce calculations for Ar+ ions in Ar and propose them as a new benchmark for thermal effects. The developed method is widely applicable for solving the Boltzmann equation that appears in many different contexts in physics.
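For the simple case of a velocity-independent cross section, the thermal-gas treatment discussed above amounts to drawing the collision partner's velocity from a Maxwellian re-weighted by the relative speed, which the hedged sketch below does by rejection sampling. kT_over_m and the 5-sigma envelope are assumptions of the sketch, not the authors' exact piecewise collision-frequency integrals.

import math
import random

# Collision partner velocity for a velocity-independent cross section: Maxwellian
# re-weighted by the relative speed, sampled by rejection with a 5-sigma envelope
# (the truncated tail is negligible). kT_over_m is in velocity-squared units.
def sample_collision_partner(v_ion, kT_over_m, rng=None):
    rng = rng or random.Random()
    sigma = math.sqrt(kT_over_m)
    g_max = math.sqrt(sum(c * c for c in v_ion)) + 5.0 * sigma
    while True:
        v_gas = [rng.gauss(0.0, sigma) for _ in range(3)]   # Maxwellian component draws
        g = math.dist(v_ion, v_gas)                         # relative speed
        if rng.random() < g / g_max:                        # accept with probability ~ g
            return v_gas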
4. Characterization of a novel micro-irradiator using Monte Carlo radiation transport simulations
Rodriguez, Manuel; Jeraj, Robert
2008-06-01
Small animals are highly valuable resources for radiobiology research. While rodents have been widely used for decades, zebrafish embryos have recently become a very popular research model. However, unlike rodents, zebrafish embryos lack appropriate irradiation tools and methodologies. Therefore, the main purpose of this work is to use Monte Carlo radiation transport simulations to characterize dosimetric parameters, determine dosimetric sensitivity and help with the design of a new micro-irradiator capable of delivering irradiation fields as small as 1.0 mm in diameter. The system is based on a miniature x-ray source enclosed in a brass collimator with 3 cm diameter and 3 cm length. A pinhole of 1.0 mm diameter along the central axis of the collimator is used to produce a narrow photon beam. The MCNP5, Monte Carlo code, is used to study the beam energy spectrum, percentage depth dose curves, penumbra and effective field size, dose rate and radiation levels at 50 cm from the source. The results obtained from Monte Carlo simulations show that a beam produced by the miniature x-ray and the collimator system is adequate to totally or partially irradiate zebrafish embryos, cell cultures and other small specimens used in radiobiology research.
5. A multiple source model for 6 MV photon beam dose calculations using Monte Carlo.
PubMed
Fix, M K; Stampanoni, M; Manser, P; Born, E J; Mini, R; Rüegsegger, P
2001-05-01
A multiple source model (MSM) for the 6 MV beam of a Varian Clinac 2300 C/D was developed by simulating radiation transport through the accelerator head for a set of square fields using the GEANT Monte Carlo (MC) code. The corresponding phase space (PS) data enabled the characterization of 12 sources representing the main components of the beam defining system. By parametrizing the source characteristics and by evaluating the dependence of the parameters on field size, it was possible to extend the validity of the model to arbitrary rectangular fields which include the central 3 × 3 cm² field without additional precalculated PS data. Finally, a sampling procedure was developed in order to reproduce the PS data. To validate the MSM, the fluence, energy fluence and mean energy distributions determined from the original and the reproduced PS data were compared and showed very good agreement. In addition, the MC calculated primary energy spectrum was verified by an energy spectrum derived from transmission measurements. Comparisons of MC calculated depth dose curves and profiles, using original and PS data reproduced by the MSM, agree within 1% and 1 mm. Deviations from measured dose distributions are within 1.5% and 1 mm. However, the real beam leads to some larger deviations outside the geometrical beam area for large fields. Calculated output factors in 10 cm water depth agree within 1.5% with experimentally determined data. In conclusion, the MSM produces accurate PS data for MC photon dose calculations for the rectangular fields specified. PMID:11384062
6. A deterministic computational model for the two dimensional electron and photon transport
Badavi, Francis F.; Nealy, John E.
2014-12-01
A deterministic (non-statistical) two dimensional (2D) computational model describing the transport of electrons and photons typical of the space radiation environment in various shield media is described. The 2D formalism is cast into a code which is an extension of a previously developed one dimensional (1D) deterministic electron and photon transport code. The goal of both the 1D and 2D codes is to satisfy engineering design applications (i.e. rapid analysis) while maintaining an accurate physics based representation of electron and photon transport in the space environment. Both the 1D and 2D transport codes have utilized established theoretical representations to describe the relevant collisional and radiative interactions and transport processes. In the 2D version, the shield material specifications are made more general as having the pertinent cross sections. In the 2D model, the specification of the computational field is in terms of a distance of traverse z along an axial direction as well as a variable distribution of deflection (i.e. polar) angles θ, where -π/2 ≤ θ ≤ π/2. In the transport formalism, a combined mean-free-path and average trajectory approach is used. For candidate shielding materials, using the trapped electron radiation environments at low Earth orbit (LEO), geosynchronous orbit (GEO) and the Jupiter moon Europa, verification of the 2D formalism against the 1D code and an existing Monte Carlo code is presented.
7. Dissipationless electron transport in photon-dressed nanostructures.
PubMed
Kibis, O V
2011-09-01
It is shown that the electron coupling to photons in field-dressed nanostructures can result in the ground electron-photon state with a nonzero electric current. Since the current is associated with the ground state, it flows without the Joule heating of the nanostructure and is nondissipative. Such a dissipationless electron transport can be realized in strongly coupled electron-photon systems with the broken time-reversal symmetry--particularly, in quantum rings and chiral nanostructures dressed by circularly polarized photons. PMID:21981519
8. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
9. Electron transport through a quantum dot assisted by cavity photons
Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2013-11-01
We investigate transient transport of electrons through a single quantum dot controlled by a plunger gate. The dot is embedded in a finite wire with length Lx assumed to lie along the x-direction with a parabolic confinement in the y-direction. The quantum wire, originally with hard-wall confinement at its ends, ±Lx/2, is weakly coupled at t = 0 to left and right leads acting as external electron reservoirs. The central system, the dot and the finite wire, is strongly coupled to a single cavity photon mode. A non-Markovian density-matrix formalism is employed to take into account the full electron-photon interaction in the transient regime. In the absence of a photon cavity, a resonant current peak can be found by tuning the plunger-gate voltage to lift a many-body state of the system into the source-drain bias window. In the presence of an x-polarized photon field, additional side peaks can be found due to photon-assisted transport. By appropriately tuning the plunger-gate voltage, the electrons in the left lead are allowed to undergo coherent inelastic scattering to a two-photon state above the bias window if initially one photon was present in the cavity. However, this photon-assisted feature is suppressed in the case of a y-polarized photon field due to the anisotropy of our system caused by its geometry.
11. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
SciTech Connect
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
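The first technique listed above, replacing a linear search with a binary version, typically targets sorted-grid lookups in the tracking inner loop. The sketch below shows the equivalence on an assumed cross-section energy grid; the grid values and function names are illustrative, not ITS internals.

import bisect

# Bin lookup on a sorted energy grid: binary search gives the same bin as a linear scan
# at O(log n) cost instead of O(n).
energy_grid = [1e-3 * 1.25 ** i for i in range(200)]        # sorted energies (MeV)

def find_bin_linear(e):
    for i in range(len(energy_grid) - 1):
        if energy_grid[i] <= e < energy_grid[i + 1]:
            return i
    return len(energy_grid) - 2

def find_bin_binary(e):
    i = bisect.bisect_right(energy_grid, e) - 1             # largest i with grid[i] <= e
    return min(max(i, 0), len(energy_grid) - 2)

assert find_bin_linear(0.05) == find_bin_binary(0.05)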
12. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation
PubMed Central
Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun
2015-01-01
The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695
13. Acceleration of a Monte Carlo radiation transport code
SciTech Connect
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
Bochud, François O.; Laedermann, Jean-Pascal; Sima, Octavian
2015-06-01
In radionuclide metrology, Monte Carlo (MC) simulation is widely used to compute parameters associated with primary measurements or calibration factors. Although MC methods are used to estimate uncertainties, the uncertainty associated with radiation transport in MC calculations is usually difficult to estimate. Counting statistics is the most obvious component of MC uncertainty and has to be checked carefully, particularly when variance reduction is used. However, in most cases fluctuations associated with counting statistics can be reduced using sufficient computing power. Cross-section data have intrinsic uncertainties that induce correlations when apparently independent codes are compared. Their effect on the uncertainty of the estimated parameter is difficult to determine and varies widely from case to case. Finally, the most significant uncertainty component for radionuclide applications is usually that associated with the detector geometry. Recent 2D and 3D x-ray imaging tools may be utilized, but comparison with experimental data as well as adjustments of parameters are usually inevitable.
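The counting-statistics component mentioned above is usually checked with a batched tally and its standard error, as in the minimal sketch below; the per-history detection probability and batch sizes are placeholder assumptions.

import math
import random

# Batched tally with its standard error: the counting-statistics component one checks
# before worrying about cross-section or geometry uncertainties.
def tally_with_statistics(n_batches=50, histories_per_batch=2000, seed=6):
    rng = random.Random(seed)
    batch_means = []
    for _ in range(n_batches):
        hits = sum(1 for _ in range(histories_per_batch) if rng.random() < 0.03)
        batch_means.append(hits / histories_per_batch)
    mean = sum(batch_means) / n_batches
    var = sum((b - mean) ** 2 for b in batch_means) / (n_batches - 1)
    return mean, math.sqrt(var / n_batches)                 # standard error ~ 1/sqrt(N)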
15. A self-consistent electric field for Monte Carlo transport
SciTech Connect
Garabedian, P.R.
1987-01-01
The BETA transport code implements a Monte Carlo method to calculate ion and electron confinement times tau/sub i/ and tau/sub e/ for stellarator equilibria defined by the BETA equilibrium code. The magnetic field strength is represented by a double Fourier series in poloidal and toroidal angles psi and phi with coefficients depending on the toroidal flux s. A linearized drift kinetic equation determining the distribution functions of the ions and electrons is solved by a method of split time using an Adams ordinary differential equation algorithm to trace orbits and using a random walk to model the Fokker-Planck collision operator. Confinement times are estimated from exponential decay of expected values of the solution. Expected values of trigonometric functions of psi and phi serve to specify Fourier coefficients of an average over velocity space of the distribution functions.
16. Electron transport in magnetrons by a posteriori Monte Carlo simulations
Costin, C.; Minea, T. M.; Popa, G.
2014-02-01
Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few μs) using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases by a factor of 2 to 4 when the cathode voltage is doubled, in the first 1.5 μs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
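The transport coefficients discussed above can be estimated from the tracked electron positions alone: the drift velocity from the motion of the cloud centroid and the diffusion coefficient from the growth of its variance. The following sketch uses synthetic 1D positions in place of the actual a posteriori MC output:

```python
# Hedged sketch with synthetic positions, not the HiPIMS MC output: estimate the
# drift velocity as d<x>/dt and the diffusion coefficient as (1/2) dVar(x)/dt.
import numpy as np

rng = np.random.default_rng(0)

def cloud_coefficients(x_t1, x_t2, dt):
    """Finite-difference estimate from two snapshots of a 1D electron cloud."""
    drift = (x_t2.mean() - x_t1.mean()) / dt
    diffusion = 0.5 * (x_t2.var() - x_t1.var()) / dt
    return drift, diffusion

# synthetic cloud with true drift 2.0 and true diffusion coefficient 0.5 (arb. units)
dt = 1.0
x1 = rng.normal(loc=0.0, scale=1.0, size=100_000)
x2 = rng.normal(loc=2.0 * dt, scale=np.sqrt(1.0 + 2.0 * 0.5 * dt), size=100_000)
w, D = cloud_coefficients(x1, x2, dt)
print(f"drift ~ {w:.3f}, diffusion ~ {D:.3f}")
```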
17. Monte Carlo study of photon fields from a flattening filter-free clinical accelerator
SciTech Connect
Vassiliev, Oleg N.; Titt, Uwe; Kry, Stephen F.; Poenisch, Falk; Gillin, Michael T.; Mohan, Radhe
2006-04-15
In conventional clinical linear accelerators, the flattening filter scatters and absorbs a large fraction of primary photons. Increasing the beam-on time, which also increases the out-of-field exposure to patients, compensates for the reduction in photon fluence. In recent years, intensity modulated radiation therapy has been introduced, yielding better dose distributions than conventional three-dimensional conformal therapy. The drawback of this method is the further increase in beam-on time. An accelerator with the flattening filter removed, which would increase photon fluence greatly, could deliver considerably higher dose rates. The objective of the present study is to investigate the dosimetric properties of 6 and 18 MV photon beams from an accelerator without a flattening filter. The dosimetric data were generated using the Monte Carlo programs BEAMnrc and DOSXYZnrc. The accelerator model was based on the Varian Clinac 2100 design. We compared depth doses, dose rates, lateral profiles, doses outside collimation, and total and collimator scatter factors for an accelerator with and without a flattening filter. The study showed that removing the filter increased the dose rate on the central axis by a factor of 2.31 (6 MV) and 5.45 (18 MV) at a given target current. Because the flattening filter is a major source of head scatter photons, its removal from the beam line could reduce the out-of-field dose.
18. Controlling single-photon transport with three-level quantum dots in photonic crystals
Yan, Cong-Hua; Jia, Wen-Zhi; Wei, Lian-Fu
2014-03-01
We investigate how to control single-photon transport along the photonic crystal waveguide with the recently experimentally demonstrated artificial atoms [i.e., Λ-type quantum dots (QDs)] [S. G. Carter et al., Nat. Photon. 7, 329 (2013), 10.1038/nphoton.2013.41] in an all-optical way. Adopting full quantum theory in real space, we analytically calculate the transport coefficients of single photons scattered by a Λ-type QD embedded in single- and two-mode photonic crystal cavities (PCCs), respectively. Our numerical results clearly show that the photonic transmission properties can be exactly manipulated by adjusting the coupling strengths of waveguide-cavity and QD-cavity interactions. Specifically, for the PCC with two degenerate orthogonal polarization modes coupled to a Λ-type QD with two degenerate ground states, we find that the photonic transmission spectra show three Rabi-splitting dips and the present system could serve as a single-photon polarization beam splitter. The feasibility of our proposal with the current photonic crystal technique is also discussed.
19. Identifying key surface parameters for optical photon transport in GEANT4/GATE simulations.
PubMed
Nilsson, Jenny; Cuplov, Vesna; Isaksson, Mats
2015-09-01
For a scintillator used for spectrometry, the generation, transport and detection of optical photons have a great impact on the energy spectrum resolution. A complete Monte Carlo model of a scintillator includes a coupled ionizing particle and optical photon transport, which can be simulated with the GEANT4 code. The GEANT4 surface parameters control the physics processes an optical photon undergoes when reaching the surface of a volume. In this work the impact of each surface parameter on the optical transport was studied by looking at the optical spectrum: the number of detected optical photons per ionizing source particle from a large plastic scintillator, i.e. the output signal. All simulations were performed using GATE v6.2 (GEANT4 Application for Tomographic Emission). The surface parameter finish (polished, ground, front-painted or back-painted) showed the greatest impact on the optical spectrum, whereas the surface parameter σ(α), which controls the surface roughness, had a relatively small impact. It was also shown how the surface parameters reflectivity and reflectivity types (specular spike, specular lobe, Lambertian and backscatter) changed the optical spectrum depending on the probability for reflection and the combination of reflectivity types. A change in the optical spectrum will ultimately have an impact on a simulated energy spectrum. By studying the optical spectra presented in this work, a GEANT4 user can predict the shift in an optical spectrum caused by the alteration of a specific surface parameter. PMID:26046519
20. Single Photon Transport through an Atomic Chain Coupled to a One-dimensional Photonic Waveguide
Liao, Zeyang; Zeng, Xiaodong; Zubairy, M. Suhail
2015-03-01
We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single-mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real-time evolution of the photon transport and the atomic excitations. Our result is consistent with previous calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity can be tuned by changing the ratio of the coupling strength to the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100%. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical effects can occur in this system, such as photonic bandgap effects, quantum entanglement generation, Fano-type interference, superradiant effects and nonlinear frequency conversion. For engineering, this system may be used for single-photon frequency filtering, single-photon modulation and photon storage.
1. Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a given amount of time.
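The parallelization pattern described here (independent random-number streams per process, results combined with MPI) can be sketched in a few lines. This is not the MC4 code; it is a toy Monte Carlo integration using mpi4py, assuming that package is available, with NumPy streams standing in for SPRNG/DCMT:

```python
# Toy MPI Monte Carlo, not MC4. Run with e.g.:  mpiexec -n 4 python mc_pi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# one statistically independent stream per rank (seeded from rank + a common constant)
rng = np.random.default_rng([rank, 2024])

n_local = 1_000_000                       # histories per rank
x, y = rng.random(n_local), rng.random(n_local)
hits_local = int(np.count_nonzero(x * x + y * y < 1.0))

hits = comm.reduce(hits_local, op=MPI.SUM, root=0)
if rank == 0:
    n_total = n_local * size
    print(f"pi ~ {4.0 * hits / n_total:.6f} from {n_total:,d} histories")
```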
2. Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations
Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. Gray media (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when non-gray media (frequency-dependent phonons) are considered. In the MC simulation, phonons are propagated inside the computational domain under the various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites is computed using both gray-media and non-gray-media approaches. The non-gray-media simulations take into consideration the dispersion and polarization effects of phonon transport. The effects of volume fraction, size, shape and distribution of the nanowire fillers on heat flow and hence thermal conductivity are studied. In addition, the computational performances of the gray- and non-gray-media approaches are compared.
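The gray-media MC step described above reduces to sampling an exponential free flight of mean v·τ and re-randomizing the direction at each intrinsic scattering event. A minimal sketch with illustrative Si-like parameters (not the thesis code) estimates the fraction of phonons crossing a thin film:

```python
# Illustrative gray-media sketch, not the thesis code; parameters are Si-like
# placeholders. Free flight length ~ Exp(v*tau); direction re-randomized isotropically.
import numpy as np

rng = np.random.default_rng(3)
v = 6400.0        # group velocity, m/s (illustrative)
tau = 40e-12      # relaxation time, s (illustrative)
film = 200e-9     # film thickness along x, m

def transmission(n_phonons=50_000):
    """Fraction of phonons launched at x=0 that exit at x=film."""
    transmitted = 0
    for _ in range(n_phonons):
        x = 0.0
        mu = np.sqrt(rng.random())                        # cosine-weighted boundary emission
        while 0.0 <= x < film:
            step = -v * tau * np.log(1.0 - rng.random())  # free flight to next scattering
            x += mu * step
            mu = 2.0 * rng.random() - 1.0                 # isotropic re-emission after scattering
        transmitted += x >= film
    return transmitted / n_phonons

print(f"ballistic-plus-diffusive transmission ~ {transmission():.3f}")
```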
3. Simple beam models for Monte Carlo photon beam dose calculations in radiotherapy.
PubMed
Fix, M K; Keller, H; Rüegsegger, P; Born, E J
2000-12-01
Monte Carlo (code GEANT) produced 6 and 15 MV phase space (PS) data were used to define several simple photon beam models. For creating the PS data, the energy of the starting electrons hitting the target was tuned to get correct depth dose data compared to measurements. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation of the accelerator head. Scattered radiation outside the boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four different models were investigated, which involved different ways to determine the energies and locations of beam particles in the output plane. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5x5 to 40x40 cm2 and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in this plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. A constant fluence is used again in model 3, but the photon energy spectrum depends upon the off-axis position. Model 4, finally, uses the spatial particle fluence distribution and off-axis dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10x10 cm2 were not model sensitive. Good agreement between measured and calculated depth dose curves and profiles for all field sizes was reached for model 4. However, increasing deviations were found for increasing field sizes for models 1-3. Large deviations resulted for the profiles of models 2 and 3, because these models respectively overestimate and underestimate the energy fluence at large off-axis distances. Relative output factors consistent with measurements resulted only for model 4. PMID:11190957
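The difference between model 1 and model 4 is essentially a sampling question: where a particle starts in the PS plane and which spectrum its energy is drawn from. A hedged sketch with hypothetical spectra and fluence weights (not the GEANT-derived data) illustrates the two schemes:

```python
# Hypothetical spectra and fluence; illustrates the sampling logic only.
import numpy as np

rng = np.random.default_rng(7)
energies = np.array([0.5, 1.0, 2.0, 4.0, 6.0])            # MeV bins (hypothetical)
central_spec = np.array([0.15, 0.30, 0.30, 0.15, 0.10])   # hypothetical on-axis spectrum
off_axis_spec = np.array([0.25, 0.35, 0.25, 0.10, 0.05])  # softer off-axis (hypothetical)
radii = np.linspace(0.0, 20.0, 21)                        # cm in the PS plane
fluence = np.exp(-radii / 30.0)                           # hypothetical radial fluence

def sample_model1():
    r = 20.0 * np.sqrt(rng.random())                 # uniform fluence over the disc
    e = rng.choice(energies, p=central_spec / central_spec.sum())   # one spectrum everywhere
    return r, e

def sample_model4():
    w = fluence * radii                              # area-weighted radial fluence
    r = rng.choice(radii, p=w / w.sum())             # non-uniform fluence
    spec = central_spec if r < 10.0 else off_axis_spec
    return r, rng.choice(energies, p=spec / spec.sum())

print("model 1 sample:", sample_model1())
print("model 4 sample:", sample_model4())
```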
4. Optimizing light transport in scintillation crystals for time-of-flight PET: an experimental and optical Monte Carlo simulation study
PubMed Central
Berg, Eric; Roncali, Emilie; Cherry, Simon R.
2015-01-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040
5. Optimizing light transport in scintillation crystals for time-of-flight PET: an experimental and optical Monte Carlo simulation study.
PubMed
Berg, Eric; Roncali, Emilie; Cherry, Simon R
2015-06-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040
6. Robust light transport in non-Hermitian photonic lattices.
PubMed
Longhi, Stefano; Gatti, Davide; Della Valle, Giuseppe
2015-01-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
7. Robust light transport in non-Hermitian photonic lattices
PubMed Central
Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della
2015-01-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
8. Robust light transport in non-Hermitian photonic lattices
Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della
2015-08-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.
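The imaginary gauge field invoked in these records (a Hatano-Nelson-type lattice) can be written down directly: hopping in one direction carries a factor exp(+h) and in the other exp(-h), so counter-propagating Bloch modes acquire gain and loss. A short numerical sketch with illustrative parameters:

```python
# Illustrative Hatano-Nelson-type lattice; parameters are placeholders.
import numpy as np

N, J, h = 40, 1.0, 0.15          # sites, hopping, imaginary gauge field

H = np.zeros((N, N), dtype=complex)
for n in range(N):
    H[n, (n + 1) % N] = J * np.exp(-h)    # hopping to the left is damped
    H[(n + 1) % N, n] = J * np.exp(+h)    # hopping to the right is amplified

# Under periodic boundaries the Bloch energies E(k) = 2J*cos(k + i*h) are complex:
# modes propagating one way are amplified (Im E > 0), the opposite ones damped.
E = np.linalg.eigvals(H)
print("max Im(E) =", E.imag.max(), " min Im(E) =", E.imag.min())
```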
9. A Fano cavity test for Monte Carlo proton transport algorithms
SciTech Connect
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-15
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E{sub 0} and a macroscopic mass cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E{sub 0} and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE{sub 0}/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm{sup 2} parallel virtual field and a cavity (2 × 2 × 0.2 cm{sup 3} size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy straggling if the step size is not small enough. Conclusions: Using conservative user-defined simulation parameters, both PENH and Geant4 pass the Fano cavity test for proton transport. Our methodology is applicable to any kind of charged particle, provided that the considered MC code is able to track the charged particle considered.
10. Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.
PubMed
Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B
2014-11-01
The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967
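The γ-index criterion used above combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail number per point. A 1D sketch (the clinical analysis is 2D/3D, and the profiles below are toy data) shows the computation:

```python
# Toy 1D global gamma-index evaluation; profiles are synthetic, not patient data.
import numpy as np

def gamma_index_1d(x, d_ref, d_eval, dta_mm=3.0, dd_percent=3.0):
    """Return the gamma value at each reference point (global normalization)."""
    dd = dd_percent / 100.0 * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((d_eval - di) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

x = np.linspace(-50, 50, 201)                          # mm
d_ref = np.exp(-(x / 30.0) ** 4)                       # toy flat-ish profile
d_eval = 1.02 * np.exp(-((x - 1.0) / 30.0) ** 4)       # ~2% / 1 mm perturbation
g = gamma_index_1d(x, d_ref, d_eval)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```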
11. Monte Carlo impurity transport modeling in the DIII-D transport
SciTech Connect
Evans, T.E.; Finkenthal, D.F.
1998-04-01
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed.
12. Status of the MORSE multigroup Monte Carlo radiation transport code
SciTech Connect
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
13. Analysis of EBR-II neutron and photon physics by multidimensional transport-theory techniques
SciTech Connect
Jacqmin, R.P.; Finck, P.J.; Palmiotti, G.
1994-03-01
This paper contains a review of the challenges specific to the EBR-II core physics, a description of the methods and techniques which have been developed for addressing these challenges, and the results of some validation studies relative to power-distribution calculations. Numerical tests have shown that the VARIANT nodal code yields eigenvalue and power predictions as accurate as finite difference and discrete ordinates transport codes, at a small fraction of the cost. Comparisons with continuous-energy Monte Carlo results have proven that the errors introduced by the use of the diffusion-theory approximation in the collapsing procedure to obtain broad-group cross sections, kerma factors, and photon-production matrices, have a small impact on the EBR-II neutron/photon power distribution.
14. The difference of scoring dose to water or tissues in Monte Carlo dose calculations for low energy brachytherapy photon sources
SciTech Connect
Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank
2011-03-15
Purpose: The goal of this work is to compare D{sub m,m} (radiation transported in medium; dose scored in medium) and D{sub w,m} (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: {sup 125}I, {sup 103}Pd, and {sup 131}Cs seeds, as well as an EBS operating at 50 kV. Ratios of D{sub w,m} over D{sub m,m} are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using {sup 103}Pd) and prostate (using {sup 125}I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D{sub 90} values are compared for D{sub w,m} and D{sub m,m}. Results: (1) Differences (D{sub w,m}/D{sub m,m}-1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D{sub w,m}/D{sub m,m} is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors up to 38%. (2) The ratio of D{sub 90(w,m)} over D{sub 90(m,m)} for clinical implants matches D{sub w,m}/D{sub m,m} at 1 cm from the single point sources. Conclusions: Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
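The large cavity theory conversion discussed above amounts to an energy-fluence-weighted average of the water-to-medium ratio of mass energy absorption coefficients over the source spectrum. A sketch with hypothetical spectrum bins and coefficient values (not NIST data) shows the arithmetic:

```python
# Hypothetical spectrum and coefficients; illustrates the weighting only.
import numpy as np

energy_keV  = np.array([20.0, 25.0, 30.0, 35.0])    # source spectrum bins
fluence     = np.array([0.20, 0.40, 0.30, 0.10])    # relative photon fluence
mu_en_rho_w = np.array([0.54, 0.27, 0.15, 0.10])    # water, cm^2/g (hypothetical)
mu_en_rho_m = np.array([0.46, 0.23, 0.13, 0.09])    # medium, cm^2/g (hypothetical)

# energy-fluence weights: photon fluence times photon energy
w = fluence * energy_keV
conversion = np.sum(w * mu_en_rho_w / mu_en_rho_m) / np.sum(w)
print(f"D_w,m / D_m,m ~ {conversion:.3f}")
```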
15. Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
PubMed
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-01-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero and of standard deviation below 2% (3 σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using only the hydrogen content of tissues, a conclusion that might impact MRI dose calculation, but can also help in selecting the optimal tissue substitutes when calibrating MVCT devices. PMID:26699320
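The hydrogen argument rests on the elemental mixture rule: at megavoltage energies the Compton cross section per unit mass scales with Z/A, which is close to 1 for hydrogen and close to 0.5 for essentially every other element. A sketch with illustrative tissue compositions makes the point:

```python
# Z/A values are standard; the tissue weight fractions below are illustrative only.
Z_over_A = {"H": 0.992, "C": 0.500, "N": 0.500, "O": 0.500, "Ca": 0.499, "P": 0.484}

def relative_electrons_per_gram(weight_fractions):
    """Relative electron density, proportional to the Compton interaction probability."""
    return sum(w * Z_over_A[el] for el, w in weight_fractions.items())

soft_tissue = {"H": 0.105, "C": 0.126, "N": 0.026, "O": 0.735, "P": 0.002, "Ca": 0.006}
hard_bone   = {"H": 0.047, "C": 0.144, "N": 0.042, "O": 0.446, "P": 0.105, "Ca": 0.216}

for name, comp in (("soft tissue", soft_tissue), ("hard bone", hard_bone)):
    print(f"{name}: relative electrons/g = {relative_electrons_per_gram(comp):.3f}")
```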
16. Effect of transverse magnetic fields on dose distribution and RBE of photon beams: comparing PENELOPE and EGS4 Monte Carlo codes
Nettelbeck, H.; Takacs, G. J.; Rosenfeld, A. B.
2008-09-01
The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions as much as 111% and 77%, respectively, where most results were within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C1 and C2 and low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum that may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower energy electron population.
17. Simulating photon scattering effects in structurally detailed ventricular models using a Monte Carlo approach
PubMed Central
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon packets as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from vessels, at times having a distinct humped morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with virtual-electrode regions of strong de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
18. Monte Carlo-based revised values of dose rate constants at discrete photon energies
PubMed Central
Selvam, T. Palani; Shrivastava, Vandana; Chourasiya, Ghanashyam; Babu, D. Appala Raju
2014-01-01
Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source as a function of photon energy is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilized the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength Sk needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in the year 1995. A comparison of absorbed dose rates in water at the above distances to the published values reflects the differences in photon cross-section datasets in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30-50 keV and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. Sk calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20-50 keV) when compared to the published values. The deviations observed in the values of dose rate and Sk affect the values of dose rate constants up to 3%. PMID:24600166
19. A Monte Carlo simulation for predicting photon return from sodium laser guide star
Feng, Lu; Kibblewhite, Edward; Jin, Kai; Xue, Suijian; Shen, Zhixia; Bo, Yong; Zuo, Junwei; Wei, Kai
2015-10-01
The sodium laser guide star is an ideal source for astronomical adaptive optics systems correcting wave-front aberration caused by atmospheric turbulence. However, even a compact, high-quality sodium laser with power above 20 W, costly and difficult to manufacture, is not guaranteed to produce a sufficiently bright guide star, owing to the physics of the sodium atom in the atmosphere. It would therefore be helpful if a prediction tool could estimate the photon-return performance of arbitrary laser output formats before an actual laser is designed. Based on rate equations, we developed Monte Carlo simulation software that can be used to predict sodium laser guide star photon return for arbitrary laser formats. In this paper, we describe the simulation model and its implementation, and present comparisons with field test data.
20. Evaluation of Electron Contamination in Cancer Treatment with Megavoltage Photon Beams: Monte Carlo Study
PubMed Central
Seif, F.; Bayatiani, M. R.
2015-01-01
Background Megavoltage beams used in radiotherapy are contaminated with secondary electrons. Different parts of the linac head and the air above the patient act as sources of this contamination. This contamination can increase damage to skin and subcutaneous tissue during radiotherapy. Monte Carlo simulation is an accurate method for dose calculation in medical dosimetry and has an important role in the optimization of linac head materials. The aim of this study was to calculate the electron contamination of a Varian linac. Materials and Method The 6 MV photon beam of the Varian (2100 C/D) linac was simulated with the Monte Carlo code MCNPX, based on the manufacturer's specifications. The validation was done by comparing the calculated depth dose and profiles of the simulation with dosimetry measurements in a water phantom (error less than 2%). The Percentage Depth Doses (PDDs), profiles and contamination electron energy spectrum were calculated for different therapeutic field sizes (5x5 to 40x40 cm2) for both linacs. Results The dose of electron contamination was observed to rise with increasing field size. The contribution of the secondary contamination electrons to the surface dose ranged from 6% for 5x5 cm2 to 27% for 40x40 cm2. Conclusion Based on the results, the effect of electron contamination on patient surface dose cannot be ignored, so knowledge of the electron contamination is important in clinical dosimetry. It must be calculated for each machine and considered in Treatment Planning Systems. PMID:25973409
1. Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
SciTech Connect
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
2. Detailed calculation of inner-shell impact ionization to use in photon transport codes
Fernandez, Jorge E.; Scot, Viviana; Verardi, Luca; Salvat, Francesc
2014-02-01
Secondary electrons can modify the intensity of the XRF characteristic lines by means of a mechanism known as inner-shell impact ionization (ISII). The ad-hoc code KERNEL (which calls the PENELOPE package) has been used to characterize the electron correction in terms of angular, spatial and energy distributions. It is demonstrated that the angular distribution of the characteristic photons due to ISII can be safely considered as isotropic, and that the source of photons from electron interactions is well represented as a point source. The energy dependence of the correction is described using an analytical model in the energy range 1-150 keV, for all the emission lines (K, L and M) of the elements with atomic numbers Z=11-92. A new photon kernel comprising the correction due to ISII is introduced, suitable to be adopted in photon transport codes (deterministic or Monte Carlo) with minimal effort. The impact of the correction is discussed for the most intense K (Kα1, Kα2, Kβ1) and L (Lα1, Lα2) lines.
3. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a user's manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
4. FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
SciTech Connect
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
5. Detector-selection technique for Monte Carlo transport in azimuthally symmetric geometries
SciTech Connect
Hoffman, T.J.; Tang, J.S.; Parks, C.V.
1982-01-01
Many radiation transport problems contain geometric symmetries which are not exploited in obtaining their Monte Carlo solutions. An important class of problems is that in which the geometry is symmetric about an axis. These problems arise in the analyses of a reactor core or shield, spent fuel shipping casks, tanks containing radioactive solutions, radiation transport in the atmosphere (air-over-ground problems), etc. Although amenable to deterministic solution, such problems can often be solved more efficiently and accurately with the Monte Carlo method. For this class of problems, a technique is described in this paper which significantly reduces the variance of the Monte Carlo-calculated effect of interest at point detectors.
6. Utilization of a Photon Transport Code to Investigate Radiation Therapy Treatment Planning Quantities and Techniques.
Palta, Jatinder Raj
A versatile computer program, MORSE, based on neutron and photon transport theory, has been utilized to investigate radiation therapy treatment planning quantities and techniques. A multi-energy-group representation of the transport equation provides a concise approach for applying Monte Carlo numerical techniques to multiple radiation therapy treatment planning problems. A general three-dimensional geometry is used to simulate radiation therapy treatment planning problems in configurations of an actual clinical setting. Central axis total and scattered dose distributions for homogeneous and inhomogeneous water phantoms are calculated, and the correction factors for lung and bone inhomogeneities are also evaluated. Results show that Monte Carlo calculations based on multi-energy-group transport theory predict depth dose distributions that are in good agreement with available experimental data. Improved correction factors based on the concepts of lung-air ratio and bone-air ratio are proposed in lieu of the presently used correction factors that are based on the tissue-air-ratio power law method for inhomogeneity corrections. Central axis depth dose distributions for a bremsstrahlung spectrum from a linear accelerator are also calculated to exhibit the versatility of the computer program in handling multiple radiation therapy problems. A novel approach is undertaken to study the dosimetric properties of brachytherapy sources. Dose rate constants for various radionuclides are calculated from the numerically generated dose rate versus source energy curves. Dose rates can also be generated for any point brachytherapy source with any arbitrary energy spectrum at various radial distances from this family of curves.
7. Determination of peripheral underdosage at the lung-tumor interface using Monte Carlo radiation transport calculations
SciTech Connect
Taylor, Michael; Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick
2012-04-01
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations are undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm{sup 3} regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
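The dose reduction factor defined above is a simple average; a minimal sketch (placeholder dose values standing in for the MC tallies) shows its computation:

```python
# Placeholder dose values; illustrates the DRF definition only.
def dose_reduction_factor(peripheral_doses, central_dose):
    if len(peripheral_doses) != 6:
        raise ValueError("expected doses at the 6 cardinal peripheral points")
    return sum(peripheral_doses) / (6.0 * central_dose)

# e.g. a 6 MV-like case: peripheral tallies a few percent below the central dose
drf = dose_reduction_factor([0.98, 0.97, 0.96, 0.97, 0.98, 0.96], central_dose=1.00)
print(f"DRF = {drf:.3f}")   # ~0.97, of the order of the 6 MV mean reported above
```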
8. SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research
Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.
2014-03-01
Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-average LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.
9. Comparative analysis of discrete and continuous absorption weighting estimators used in Monte Carlo simulations of radiative transport in turbid media
PubMed Central
Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan
2014-01-01
We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
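The two estimators can be contrasted in a few lines: the discrete scheme samples collisions from the total coefficient and multiplies the weight by the single-scattering albedo at each collision, while the continuous scheme samples collisions from the scattering coefficient alone and attenuates the weight by exp(-μa·s) along each path. A 1D slab sketch with illustrative coefficients shows both converging to the same escape fraction:

```python
# Illustrative 1D slab with isotropic scattering; both estimators are unbiased
# for the same escape (non-absorbed) fraction, as discussed above.
import numpy as np

rng = np.random.default_rng(11)
mu_a, mu_s, L = 0.5, 5.0, 1.0      # absorption, scattering (1/cm); slab thickness (cm)
mu_t = mu_a + mu_s

def escape_daw(n):
    """Discrete absorption weighting: flights ~ mu_t, weight *= albedo per collision."""
    total = 0.0
    for _ in range(n):
        x, mu, w = 0.0, 1.0, 1.0
        while True:
            x += mu * (-np.log(1.0 - rng.random()) / mu_t)
            if x < 0.0 or x > L:
                total += w                       # escaped through either face
                break
            w *= mu_s / mu_t                     # survive absorption discretely
            mu = 2.0 * rng.random() - 1.0        # isotropic scattering
    return total / n

def escape_caw(n):
    """Continuous absorption weighting: flights ~ mu_s, weight *= exp(-mu_a*s)."""
    total = 0.0
    for _ in range(n):
        x, mu, w = 0.0, 1.0, 1.0
        while True:
            s = -np.log(1.0 - rng.random()) / mu_s
            x_new = x + mu * s
            if x_new < 0.0 or x_new > L:
                to_face = (L - x) / mu if mu > 0.0 else -x / mu
                total += w * np.exp(-mu_a * to_face)   # attenuate up to the exit face
                break
            w *= np.exp(-mu_a * s)               # continuous absorption along the path
            x, mu = x_new, 2.0 * rng.random() - 1.0
    return total / n

print(escape_daw(20_000), escape_caw(20_000))    # statistically equivalent estimates
```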
10. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations
Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.
2010-11-01
We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to the analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approaches of the popular multi-layer model by Wang et al. to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties until they lose all their energy, escape into the air, or return to the VOI, but the energy deposition outside of the VOI is not computed and recorded. After discussing the selection of tissue parameters, we apply the model to the analysis of blood photocoagulation and collateral thermal damage in the treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.
11. LDRD project 151362 : low energy electron-photon transport.
SciTech Connect
Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James
2013-09-01
At sufficiently high energies, the wavelengths of electrons and photons are short enough to only interact with one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.
12. Monte Carlo calculations of correction factors for plastic phantoms in clinical photon and electron beam dosimetry
SciTech Connect
Araki, Fujio; Hanyu, Yuji; Fukuoka, Miyoko; Matsumoto, Kenji; Okumura, Masahiko; Oguchi, Hiroshi
2009-07-15
The purpose of this study is to calculate correction factors for plastic water (PW) and plastic water diagnostic-therapy (PWDT) phantoms in clinical photon and electron beam dosimetry using the EGSnrc Monte Carlo code system. A water-to-plastic ionization conversion factor k_pl for PW and PWDT was computed for several commonly used Farmer-type ionization chambers with different wall materials in the range of 4-18 MV photon beams. For electron beams, a depth-scaling factor c_pl and a chamber-dependent fluence correction factor h_pl for both phantoms were also calculated in combination with NACP-02 and Roos plane-parallel ionization chambers in the range of 4-18 MeV. The h_pl values for the plane-parallel chambers were evaluated from the electron fluence correction factor Φ_pl^w and wall correction factors P_wall,w and P_wall,pl for a combination of water or plastic materials. The calculated k_pl and h_pl values were verified by comparison with the measured values. A set of k_pl values computed for the Farmer-type chambers was equal to unity within 0.5% for PW and PWDT in photon beams. The k_pl values also agreed within their combined uncertainty with the measured data. For electron beams, the c_pl values computed for PW and PWDT were from 0.998 to 1.000 and from 0.992 to 0.997, respectively, in the range of 4-18 MeV. The Φ_pl^w values for PW and PWDT were from 0.998 to 1.001 and from 1.004 to 1.001, respectively, at a reference depth in the range of 4-18 MeV. The difference in P_wall between water and plastic materials for the plane-parallel chambers was 0.8% at a maximum. Finally, h_pl values evaluated for plastic materials were equal to unity within 0.6% for NACP-02 and Roos chambers. The h_pl values also agreed within their combined uncertainty with the measured data. The absorbed dose to water from ionization chamber measurements in PW and PWDT plastic materials corresponds to that in water within 1%. Both phantoms can thus be used as a substitute for water for photon and electron dosimetry.
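For readers not steeped in the dosimetry protocols, the following hedged sketch indicates how factors of this kind typically enter the conversion from a reading in plastic to a water-equivalent value; the exact definitions are those of the protocols referenced by the authors, not the relations below.

\[
z_w = c_{\mathrm{pl}}\, z_{\mathrm{pl}}, \qquad
M_w(z_w) \approx k_{\mathrm{pl}}\, M_{\mathrm{pl}}(z_{\mathrm{pl}}) \ \ \text{(photons)}, \qquad
M_w(z_w) \approx h_{\mathrm{pl}}\, M_{\mathrm{pl}}(z_{\mathrm{pl}}) \ \ \text{(electrons)},
\]

where M denotes the ionization chamber reading and z the measurement depth in the respective medium.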
13. Few-photon transport in many-body photonic systems: A scattering approach
Lee, Changhyoup; Noh, Changsuk; Schetakis, Nikolaos; Angelakis, Dimitris G.
2015-12-01
We study the quantum transport of multiphoton Fock states in one-dimensional Bose-Hubbard lattices implemented in QED cavity arrays (QCAs). We propose an optical scheme to probe the underlying many-body states of the system by analyzing the properties of the transmitted light using scattering theory. To this end, we employ the Lippmann-Schwinger formalism within which an analytical form of the scattering matrix can be found. The latter is evaluated explicitly for the two-particle, two-site case which we use to study the resonance properties of two-photon scattering, as well as the scattering probabilities and the second-order intensity correlations of the transmitted light. The results indicate that the underlying structure of the many-body states of the model in question can be directly inferred from the physical properties of the transported photons in its QCA realization. We find that a fully resonant two-photon scattering scenario allows a faithful characterization of the underlying many-body states, unlike in the coherent driving scenario usually employed in quantum master-equation treatments. The effects of losses in the cavities, as well as the incoming photons' pulse shapes and initial correlations, are studied and analyzed. Our method is general and can be applied to probe the structure of any many-body bosonic model amenable to a QCA implementation, including the Jaynes-Cummings-Hubbard model, the extended Bose-Hubbard model, as well as a whole range of spin models.
15. Monte Carlo simulation of small electron fields collimated by the integrated photon MLC.
PubMed
Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus
2011-02-01
In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf-collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated the observed discrepancies, namely too broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby an agreement mostly within 3%/3 mm was found. PMID:21242628
16. CAD based Monte Carlo method: Algorithms for geometric evaluation in support of Monte Carlo radiation transport calculation
Wang, Mengkuo
In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However, the geometry packages and geometric modeling capability of Monte Carlo codes are limited, as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the modeling capabilities of CAD software. The two major approaches are the Converter approach and the CAD engine based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD engine based approach has been identified as the more promising approach. Though currently the performance of this approach is not satisfactory, there is room for improvement. The development and implementation of an improved CAD based approach is the focus of this thesis. Algorithms to accelerate the CAD engine based approach are studied. The major acceleration algorithm is the Oriented Bounding Box algorithm, which is used in computer graphics. The difference in application between computer graphics and particle transport has been considered and the algorithm has been modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry provided the least slowdown of the Monte Carlo code. The CAD model generates the facet representation. The Oriented Bounding Box algorithm was the fastest acceleration technique adopted for this work. The slowdown of MCNPX/CGM relative to MCNPX was reduced to a factor of 3 when the facet model is used. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device. MCNPX/CGM gives exactly the same results as the standard MCNPX when an MCNPX geometry model is available. For the case of the complicated fusion device, the stellarator, the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.
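As background on the acceleration idea, the slab test below is a minimal Python sketch of the ray-box intersection that an oriented-bounding-box scheme relies on (in the box's local frame an OBB reduces to an axis-aligned box). It illustrates the general technique only and is not code from MCNPX/CGM; the function name and example values are illustrative.

```python
def ray_hits_box(origin, direction, box_min, box_max, eps=1e-12):
    """Slab-method ray/axis-aligned-box test: intersect the forward ray with the
    three pairs of parallel planes and keep the overlap of the parameter intervals."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < eps:                  # ray parallel to this slab
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:                # parameter intervals no longer overlap
            return False
    return True

# Example: a ray along +x starting left of a unit box centred at the origin.
print(ray_hits_box((-2.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                   (-0.5, -0.5, -0.5), (0.5, 0.5, 0.5)))   # True
```

Particles whose rays miss a solid's bounding box can skip the expensive facet-level intersection entirely, which is where this kind of acceleration gains its speedup.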
17. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY
SciTech Connect
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
18. Monte Carlo linear accelerator simulation of megavoltage photon beams: Independent determination of initial beam parameters
SciTech Connect
Almberg, Sigrun Saur; Frengen, Jomar; Kylling, Arve; Lindmo, Tore
2012-01-15
19. Monte Carlo modelling of positron transport in real world applications
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
20. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.
PubMed
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC code was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy. PMID:23085901
2. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
SciTech Connect
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
3. MC-PEPTITA: A Monte Carlo model for Photon, Electron and Positron Tracking In Terrestrial Atmosphere - Application for a terrestrial gamma ray flash
Sarria, D.; Blelly, P.-L.; Forme, F.
2015-05-01
Terrestrial gamma ray flashes are natural bursts of X and gamma rays, correlated to thunderstorms, that are likely to be produced at an altitude of about 10 to 20 km. After the emission, the flux of gamma rays is filtered and altered by the atmosphere and a small part of it may be detected by a satellite on low Earth orbit (RHESSI or Fermi, for example). Thus, only a residual part of the initial burst can be measured and most of the flux is made of scattered primary photons and of secondary emitted electrons, positrons, and photons. Trying to get information on the initial flux from the measurement is a very complex inverse problem, which can only be tackled by the use of a numerical model solving the transport of these high-energy particles. For this purpose, we developed a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons/positrons and X/gamma rays. It makes it possible to track the photons, electrons, and positrons in the whole Earth environment (considering the atmosphere and the magnetic field) to get information on what affects the transport of the particles from the source region to the altitude of the satellite. We first present the MC-PEPTITA model, and then we validate it by comparison with a benchmark GEANT4 simulation with similar settings. Then, we show the results of a simulation close to Fermi event number 091214 in order to discuss some important properties of the photons and electrons/positrons that are reaching satellite altitude.
4. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation
SciTech Connect
Yoriyaz, Helio; Moralles, Mauricio; Tarso Dalledone Siqueira, Paulo de; Costa Guimaraes, Carla da; Belonsi Cintra, Felipe; Santos, Adimir dos
2009-11-15
Purpose: Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimates in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes, owing to the variety of resources and capabilities they offer to carry out dose calculations, several aspects such as the physical models, cross sections, and numerical approximations used in the simulations still remain objects of study. Accurate dose estimates depend on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. Methods: For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Results: Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Conclusions: Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
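For context, the quantity being compared across the codes is the standard absorbed fraction; the usual MIRD-style definition is restated below for orientation rather than taken from the paper.

\[
\phi(\text{target} \leftarrow \text{source}) \;=\; \frac{E_{\text{absorbed in target}}}{E_{\text{emitted by source}}}, \qquad 0 \le \phi \le 1 .
\]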
5. A GAMOS plug-in for GEANT4 based Monte Carlo simulation of radiation-induced light transport in biological media.
PubMed
Glaser, Adam K; Kanick, Stephen C; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W
2013-05-01
We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and the subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in modified package also make it extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790
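As a reminder of the physics that makes the coupled simulation necessary: Čerenkov emission requires the charged particle to outrun the local phase velocity of light. The threshold condition and the corresponding electron threshold kinetic energy are standard results, quoted here for orientation only.

\[
\beta > \frac{1}{n}, \qquad
E_{\mathrm{th}} = m_e c^2\left(\frac{1}{\sqrt{1-1/n^2}} - 1\right) \approx 0.26\ \text{MeV for electrons in water } (n \approx 1.33).
\]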
7. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
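Since the abstract leans on importance sampling, it may help to recall the generic weight relation that keeps such estimators unbiased; this is textbook material, not a formula taken from FASTER-III.

\[
w(x) = \frac{p(x)}{q(x)}, \qquad
\mathbb{E}_p[f] = \mathbb{E}_q\!\left[w(x)\, f(x)\right],
\]

where p is the physical (analog) density, q the biased sampling density actually used, and f the tally of interest.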
8. A fully coupled Monte Carlo/discrete ordinates solution to the neutron transport equation. Final report
SciTech Connect
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
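Schematically, the interface-flux iteration described above can be viewed as a fixed-point update of the shared angular boundary fluxes; the relation below is a hedged paraphrase of the abstract, not the actual TWODANT implementation.

\[
\psi_b^{(k+1)} = R\, \psi_b^{(k)} + s,
\]

where ψ_b collects the common angular boundary fluxes, R is the response matrix estimated by the Monte Carlo routines, and s gathers the volumetric source contributions; the update is repeated until ψ_b converges.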
9. Investigation of a probe design for facilitating the uses of the standard photon diffusion equation at short source-detector separations: Monte Carlo simulations
Tseng, Sheng-Hao; Hayakawa, Carole; Spanier, Jerome; Durkin, Anthony J.
2009-09-01
We design a special diffusing probe to investigate the optical properties of human skin in vivo. The special geometry of the probe enables a modified two-layer (MTL) diffusion model to precisely describe the photon transport even when the source-detector separation is shorter than 3 mean free paths. We provide a frequency domain comparison between the Monte Carlo model and the diffusion model in both the MTL geometry and the conventional semi-infinite geometry. We show that, using the Monte Carlo model as a benchmark method, the MTL diffusion theory performs better than the diffusion theory in the semi-infinite geometry. In addition, we carry out Monte Carlo simulations with the goal of investigating the dependence of the interrogation depth of this probe on several parameters, including source-detector separation, sample optical properties, and properties of the diffusing high-scattering layer. From the simulations, we find that the optical properties of samples modulate the interrogation volume greatly, and the source-detector separation and the thickness of the diffusing layer are the two dominant probe parameters that impact the interrogation volume. Our simulation results provide design guidelines for a MTL geometry probe.
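For readers comparing the two models, the standard diffusion-approximation length scales referred to implicitly above are the transport mean free path and the effective attenuation coefficient; these are textbook definitions, not parameters specific to the MTL probe.

\[
l^{*} = \frac{1}{\mu_a + \mu_s'}, \qquad
\mu_{\mathrm{eff}} = \sqrt{3\,\mu_a\,(\mu_a + \mu_s')},
\]

where μ_a is the absorption coefficient and μ_s' the reduced scattering coefficient; l* is the length scale against which "short" source-detector separations are conventionally judged.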
10. Hypersensitive Transport in Photonic Crystals with Accidental Spatial Degeneracies.
PubMed
Makri, Eleana; Smith, Kyle; Chabanov, Andrey; Vitebskiy, Ilya; Kottos, Tsampikos
2016-01-01
A localized mode in a photonic layered structure can develop nodal points (nodal planes), where the oscillating electric field is negligible. Placing a thin metallic layer at such a nodal point results in the phenomenon of induced transmission. Here we demonstrate that if the nodal point is not a point of symmetry, then even a tiny alteration of the permittivity in the vicinity of the metallic layer drastically suppresses the localized mode along with the resonant transmission. This renders the layered structure highly reflective within a broad frequency range. Applications of this hypersensitive transport for optical and microwave limiting and switching are discussed. PMID:26903232
12. Monte Carlo study of photon beams from medical linear accelerators: Optimization, benchmark and spectra
Sheikh-Bagheri, Daryoush
1999-12-01
BEAM is a general purpose EGS4 user code for simulating radiotherapy sources (Rogers et al. Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The efficiency of the uniform bremsstrahlung splitting (UBS) technique is assessed and found to be 4 times more efficient. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations additionally 2 times more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the end-point considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident e- energy is known independently to 1%, the entire extra-focal radiation is simulated and e- contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 x 10 cm2 field at 110 cm. At both energies, the calculated and the measured values of ionization on the central-axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central-axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1 σ level and for all at the 2 σ level. The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1 σ level) statistics, at all depths past depth of maximum dose for almost all beams. The calculated photon spectra and average energy distributions are compared to those published by Mohan et al. and decomposed into direct and scattered photon components.
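To make the variance-reduction vocabulary concrete, the Python sketch below shows generic Russian roulette and uniform particle splitting; it illustrates the standard techniques by name only and is not the BEAM/EGS4 implementation (the selective-splitting logic described above is considerably more elaborate), and the threshold and survival values are purely illustrative.

```python
import random

def russian_roulette(weight, threshold=0.05, survival=0.1):
    """Kill low-weight particles with probability 1 - survival; survivors have
    their weight boosted so that the expected weight is conserved."""
    if weight >= threshold:
        return weight                 # particle continues unchanged
    if random.random() < survival:
        return weight / survival      # survivor carries the extra weight
    return 0.0                        # particle terminated

def uniform_split(weight, n_split=4):
    """Uniform splitting: replace one particle by n_split copies of reduced
    weight so that the total weight is unchanged."""
    return [weight / n_split] * n_split

if __name__ == "__main__":
    random.seed(0)
    print(russian_roulette(0.01))     # either 0.0 or 0.1
    print(uniform_split(1.0))         # [0.25, 0.25, 0.25, 0.25]
```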
13. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy
Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.
2010-08-01
This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.
14. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
15. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.
PubMed
Czarnecki, D; Zink, K
2013-04-21
The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω^{f_clin,f_msr}_{Q_clin,Q_msr} with high precision. Both quantities require the knowledge of the field-size-dependent and detector-dependent correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}. The aim of this study is the determination of the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}; this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k^{f_clin,f_msr}_{Q_clin,Q_msr} is of the order of 1.2 at a field size of 1 × 1 cm² for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k^{f_clin,f_msr}_{Q_clin,Q_msr} below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} is closest to unity in small fields and mainly independent of the electron beam spot size. PMID:23514734
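For orientation, the quantities above follow the usual small-field formalism in which the field factor converts dose between the clinical field f_clin and the machine-specific reference field f_msr; the relation below is the commonly quoted form and is given here as background, not as a result of the paper.

\[
\Omega^{f_\mathrm{clin},f_\mathrm{msr}}_{Q_\mathrm{clin},Q_\mathrm{msr}}
= \frac{D^{f_\mathrm{clin}}_{w,Q_\mathrm{clin}}}{D^{f_\mathrm{msr}}_{w,Q_\mathrm{msr}}}
= \frac{M^{f_\mathrm{clin}}_{Q_\mathrm{clin}}}{M^{f_\mathrm{msr}}_{Q_\mathrm{msr}}}\;
k^{f_\mathrm{clin},f_\mathrm{msr}}_{Q_\mathrm{clin},Q_\mathrm{msr}},
\]

where D_w is the absorbed dose to water and M the detector reading in the respective field.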
16. Effect of statistical fluctuation in Monte Carlo based photon beam dose calculation on gamma index evaluation
Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2013-03-01
The γ-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate both theoretically and experimentally the impact of the MC statistical fluctuation on the γ-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To a first-order approximation, we theoretically demonstrated in a simplified model that the statistical fluctuation tends to overestimate γ-index values when it exists in the reference dose distribution and to underestimate γ-index values when it exists in the evaluation dose distribution, provided the original γ-index is relatively large compared with the statistical fluctuation. Our numerical experiments using realistic clinical photon radiation therapy cases have shown that (1) when performing a γ-index test between an MC reference dose and a non-MC evaluation dose, the average γ-index is overestimated and the gamma passing rate decreases with the increase of the statistical noise level in the reference dose; (2) when performing a γ-index test between a non-MC reference dose and an MC evaluation dose, the average γ-index is underestimated when they are within the clinically relevant range and the gamma passing rate increases with the increase of the statistical noise level in the evaluation dose; (3) when performing a γ-index test between an MC reference dose and an MC evaluation dose, the gamma passing rate is overestimated due to the statistical noise in the evaluation dose and underestimated due to the statistical noise in the reference dose. We conclude that the γ-index test should be used with caution when comparing dose distributions computed with MC simulation.
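Since the whole analysis hinges on the γ-index, its standard definition (Low et al.) is worth restating; the notation below is generic, with Δd the distance-to-agreement criterion, ΔD the dose-difference criterion, D_r the reference dose and D_e the evaluation dose.

\[
\gamma(\mathbf{r}_e) = \min_{\mathbf{r}_r}
\sqrt{\frac{\lVert \mathbf{r}_r - \mathbf{r}_e \rVert^2}{\Delta d^2}
+ \frac{\left[D_r(\mathbf{r}_r) - D_e(\mathbf{r}_e)\right]^2}{\Delta D^2}},
\]

with a point passing the test when γ ≤ 1; statistical noise in D_r or D_e perturbs γ in the asymmetric way described above.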
17. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
SciTech Connect
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users in the nuclear field, and in particular in the areas of core design and radiation analysis. (authors)
18. Update On the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, and upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64 bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, and dose calculations for radiation therapy, as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.
19. Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study
SciTech Connect
Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon
2012-03-15
Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at field sizes of 5 × 5 cm² (FS5), 10 × 10 cm² (FS10), and 20 × 20 cm² (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source to surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening filter free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, the percentage depth doses (PDD) of the modulated 6 and 10 MV beams beyond the depth of dose maximum increased continuously. The PDD increase at depths of 10 and 20 cm for modulated 6 MV was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. The same for modulated 10 MV was 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness of mercury increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm mercury and of modulated 10 MV with 25 mm mercury were reduced to 30% and 56% of the conventional linac output, respectively. The energy-modulated IMRT plans had lower integral doses than the 6 MV IMRT or 6 MV flattening filter free plans for tumors located in the periphery, while maintaining similar quality of target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing any quality of the IMRT plan.
20. Time series analysis of Monte Carlo neutron transport calculations
Nease, Brian Robert
A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
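Expressed as an equation, the time-series view described above treats the expansion coefficient of a given mode in the stationary fission source as a first-order autoregressive process; this merely restates the abstract in symbols rather than adding new results.

\[
a_i^{(n+1)} = \rho_i\, a_i^{(n)} + \varepsilon^{(n)}, \qquad
\rho_i \approx \frac{k_i}{k_0},
\]

so the fitted autocorrelation coefficient ρ_i, combined with the fundamental-mode eigenvalue k_0 already produced by the code, yields the desired higher-mode eigenvalue k_i.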
1. Modeling bioluminescent photon transport in tissue based on Radiosity-diffusion model
Sun, Li; Wang, Pu; Tian, Jie; Zhang, Bo; Han, Dong; Yang, Xin
2010-03-01
Bioluminescence tomography (BLT) is one of the most important non-invasive optical molecular imaging modalities. The model for bioluminescent photon propagation plays a significant role in bioluminescence tomography studies. Due to its high computational efficiency, the diffusion approximation (DA) is generally applied in bioluminescence tomography. But the diffusion equation is valid only in highly scattering and weakly absorbing regions and fails in non-scattering or low-scattering tissues, such as a cyst in the breast, the cerebrospinal fluid (CSF) layer of the brain and the synovial fluid layer in the joints. A hybrid Radiosity-diffusion model is proposed in this paper for dealing with the non-scattering regions within diffusing domains. This hybrid method incorporates a priori information on the geometry of non-scattering regions, which can be acquired by magnetic resonance imaging (MRI) or x-ray computed tomography (CT). The model is then implemented using a finite element method (FEM) to ensure high computational efficiency. Finally, we demonstrate that the method is comparable with the Monte Carlo (MC) method, which is regarded as a 'gold standard' for photon transport simulation.
2. Delocalization of electrons by cavity photons in transport through a quantum dot molecule
Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2014-11-01
We present results on cavity-photon-assisted electron transport through two lateral quantum dots embedded in a finite quantum wire. The double quantum dot system is weakly connected to two leads and strongly coupled to a single quantized photon cavity mode with initially two linearly polarized photons in the cavity. Including the full electron-photon interaction, the transient current controlled by a plunger gate in the central system is studied by using a quantum master equation. Without a photon cavity, two resonant current peaks are observed in the range selected for the plunger gate voltage: the ground state peak, and the peak corresponding to the first-excited state. The current in the ground state is higher than in the first-excited state due to their different symmetry. In a photon cavity with the photon field polarized along or perpendicular to the transport direction, two extra side peaks are found, namely a photon replica of the ground state and a photon replica of the first-excited state. The side peaks are caused by photon-assisted electron transport, with multiphoton absorption processes for up to three photons during an electron tunneling process. The inter-dot tunneling in the ground state can be controlled by the photon cavity in the case of the photon field polarized along the transport direction. The electron charge is delocalized from the dots by the photon cavity. Furthermore, the current in the photon-induced side peaks can be strongly enhanced by increasing the electron-photon coupling strength for the case of photons polarized along the transport direction.
3. Light transport and lasing in complex photonic structures
Liew, Seng Fatt
Complex photonic structures refer to composite optical materials with dielectric constant varying on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites is greatly different from homogeneous media due to scattering of light in all directions. Interference of these scattered light waves gives rise to many fascinating phenomena and it has been a fast growing research area, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to preservation of long-range positional order. However, in higher dimensions, the short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters including dielectric filling fraction and degree of order that can be tuned to create a broad isotropic PBG. The largest PBG is produced by the dielectric networks due to local uniformity in their dielectric constant distribution. In addition, we also show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG and their optical resonances contain unexpected features compared to those in photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scatterings. For better understanding of the coloration mechanisms, we have measured the wavelength-dependent scattering length from the biomimetic samples. Our theoretical modeling and analysis explains why single scattering of light is dominant over multiple scattering in similar biological structures and is responsible for color generation. In collaboration with evolutionary biologists, we examine how closely-related species and populations of butterflies have evolved their structural color. We have used artificial selection on a lab model butterfly to evolve violet color from an ultra-violet brown color. The same coloration mechanism is found in other blue/violet species that have evolved their color in nature, which implies the same evolution path for their nanostructure. While the absorption of light is ubiquitous in nature and in applications, the question remains how absorption modifies the transmission in random media. Therefore, we numerically study the effects of optical absorption on the highest transmission states in a two-dimensional disordered waveguide. Our results show that strong absorption turns the highest transmission channel in random media from diffusive to ballistic-like transport. Finally, we have demonstrated lasing mode selection in a nearly circular semiconductor microdisk laser by shaping the spatial profile of the pump beam. Despite strong mode overlap, selective pumping suppresses the competing lasing modes by either increasing their thresholds or reducing their power slopes. As a result, we can switch both the lasing frequency and the output direction. This powerful technique can have potential application as an on-chip tunable light source.
4. Dosimetric impact of monoenergetic photon beams in the small-animal irradiation with inhomogeneities: A Monte Carlo evaluation
Chow, James C. L.
2013-05-01
This study investigated the variations of the dose and dose distribution in a small-animal irradiation due to the photon beam energy and the presence of inhomogeneity. Based on the same mouse computed tomography image set, three Monte Carlo phantoms, namely inhomogeneous, homogeneous and bone-tissue phantoms, were used in this study. These phantoms were generated by overriding the relative electron density of no voxels (inhomogeneous), all voxels (homogeneous) or the bone voxels (bone-tissue) to one. 360 photon arcs with beam energies of 50-1250 kV were used in the mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. It was found that the dose conformity increased with the increase of the photon beam energy from the kV to MV range. For the inhomogeneous mouse phantom, increasing the photon beam energy from 50 kV to 1250 kV increased the dose deposited at the isocenter by a factor of about 21. For the bone dose enhancement, the mean dose was 1.4 times higher when the bone inhomogeneity was not neglected using the 50 kV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose in the mouse irradiation can be found for photon beams in the energy range of 50-200 kV, and the dose enhancement decreases with an increase of the beam energy. Moreover, the MV photon beam has a higher dose at the isocenter and a better dose conformity compared to the kV beam.
5. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications.
PubMed
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-09-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost becomes higher when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons is used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme with a local grid refinement technique to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for the tissue with high absorption and complex geometry, and coarse grids are used for the other part. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by a factor of 7.6 (reducing the time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions. PMID:26417866
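As a quick worked example of the photon-number rule quoted above (at least five launched photons per voxel in total), consider an illustrative 100 × 100 × 100 voxel grid:

\[
N_{\text{photon}} \;\ge\; 5\, N_{\text{voxel}} \;=\; 5 \times 100^3 \;=\; 5 \times 10^{6}.
\]

The grid size here is purely illustrative; the paper's criterion itself is the inequality, not this particular number.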
6. Monte Carlo simulation on pre-clinical irradiation: A heterogeneous phantom study on monoenergetic kilovoltage photon beams
Chow, James C. L.
2012-10-01
This study investigated radiation dose variations in pre-clinical irradiation due to the photon beam energy and the presence of tissue heterogeneity. Based on the same mouse computed tomography image dataset, three phantoms, namely heterogeneous, homogeneous and bone homogeneous, were used. These phantoms were generated by overriding the relative electron density of no voxels (heterogeneous), all voxels (homogeneous) or the bone voxels (bone homogeneous) to one. 360 photon arcs with beam energies of 50-1250 keV were used in the mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. Monte Carlo simulations were carried out in parallel using multiple nodes of a high-performance computing cluster. It was found that the dose conformity increased with the increase of the photon beam energy from the keV to MeV range. For the heterogeneous mouse phantom, increasing the photon beam energy from 50 keV to 1250 keV increased the dose deposited at the isocenter by a factor of seven. For the bone dose enhancement, the mean dose was 2.7 times higher when the bone heterogeneity was not neglected using the 50 keV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose was found for photon beams in the energy range of 50-200 keV, and the dose enhancement decreased with an increase of the beam energy. Moreover, the MeV photon beam had a higher dose at the isocenter and a better dose conformity compared to the keV beam.
7. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-09-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve smoothness of tissue interface and a large number of photons are used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme of local grid refinement technique to reduce computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for the tissue with high absorption and complex geometry, and coarse grids are used for the other part. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Result shows that local grid refinement technique photon ray splitting scheme can accelerate the computation by 7.6 times (reduce time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions.
8. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios
SciTech Connect
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
9. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media 2. Transport results
USGS Publications Warehouse
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.
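For readers unfamiliar with the moment analysis referred to here, the sketch below shows textbook spatial-moment and apparent-longitudinal-macrodispersivity estimators for a one-dimensional tracer cloud; the paper's own estimators and standard-error approximations are more elaborate, and the synthetic data are purely illustrative.

    import numpy as np

    def cloud_moments(x, weights=None):
        """First moment (centroid) and second central moment (variance)
        of tracer positions x along one coordinate."""
        w = np.ones_like(x) if weights is None else weights
        centroid = np.average(x, weights=w)
        variance = np.average((x - centroid) ** 2, weights=w)
        return centroid, variance

    def apparent_macrodispersivity(times, centroids, variances):
        """A_L(t) ~ 0.5 * d(sigma^2)/dt divided by the mean cloud velocity,
        estimated with finite differences between snapshots."""
        dvar = np.gradient(variances, times)
        vel = np.gradient(centroids, times)
        return 0.5 * dvar / vel

    # Synthetic advecting, spreading cloud (illustration only).
    rng = np.random.default_rng(0)
    times = np.array([1.0, 2.0, 3.0, 4.0])
    cents, varis = [], []
    for t in times:
        x = rng.normal(loc=0.5 * t, scale=np.sqrt(0.04 * t), size=20000)
        c, v = cloud_moments(x)
        cents.append(c)
        varis.append(v)
    print(apparent_macrodispersivity(times, np.array(cents), np.array(varis)))  # ~0.04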
10. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms
Bouchard, Hugo; Bielajew, Alex
2015-07-01
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms.
11. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms.
PubMed
Bouchard, Hugo; Bielajew, Alex
2015-07-01
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms. PMID:26061045
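To make concrete what coupling the Lorentz force to particle stepping involves, the sketch below applies the standard non-relativistic Boris rotation to a charged particle between collisions. This is a generic textbook integrator under assumed constant fields, not the algorithm proposed by these authors.

    import numpy as np

    def boris_push(v, E, B, q, m, dt):
        """One non-relativistic Boris step: half electric kick,
        magnetic rotation, half electric kick. Returns the new velocity."""
        qmdt2 = q * dt / (2.0 * m)
        v_minus = v + qmdt2 * E                  # first half of the E kick
        t = qmdt2 * B                            # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
        return v_plus + qmdt2 * E                # second half of the E kick

    # Electron gyrating in an assumed 1.5 T field along z (illustrative values).
    q, m = -1.602e-19, 9.109e-31
    v = np.array([1.0e7, 0.0, 0.0])
    E = np.zeros(3)
    B = np.array([0.0, 0.0, 1.5])
    dt = 1.0e-13
    for _ in range(5):
        v = boris_push(v, E, B, q, m, dt)
    print(v, np.linalg.norm(v))   # speed is conserved when E = 0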
12. The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs
Sangiorgi, Enrico; Palestri, Pierpaolo; Esseni, David; Fiegna, Claudio; Selmi, Luca
2008-09-01
In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the Poisson equation. Selected applications are presented, including the analysis of quasi-ballistic transport, the determination of the RF characteristics of deca-nanometric MOSFETs, and the study of non-conventional device structures and channel materials.
13. Correlated few-photon transport in one-dimensional waveguides: Linear and nonlinear dispersions
SciTech Connect
Roy, Dibyendu
2011-04-15
We address correlated few-photon transport in one-dimensional waveguides coupled to a two-level system (TLS), such as an atom or a quantum dot. We derive exactly the single-photon and two-photon current (transmission) for linear and nonlinear (tight-binding sinusoidal) energy-momentum dispersion relations of photons in the waveguides and compare the results for the different dispersions. A large enhancement of the two-photon current for the sinusoidal dispersion has been seen at a certain transition energy of the TLS away from the single-photon resonances.
14. Radiation dose measurements and Monte Carlo calculations for neutron and photon reactions in a human head phantom for accelerator-based boron neutron capture therapy
Kim, Don-Soo
15. Data decomposition of Monte Carlo particle transport simulations via tally servers
SciTech Connect
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
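The essence of the tally-server scheme is a static split of compute ranks into trackers and servers plus a rule mapping each tally bin to its owning server. The sketch below shows only that bookkeeping (no message passing); the block decomposition and rank counts are arbitrary placeholders, not OpenMC's actual implementation.

    def decompose_ranks(n_ranks, n_servers):
        """Assign the last n_servers ranks to tally duty, the rest to tracking."""
        trackers = list(range(n_ranks - n_servers))
        servers = list(range(n_ranks - n_servers, n_ranks))
        return trackers, servers

    def owning_server(tally_bin, n_bins, servers):
        """Block-distribute tally bins across the server ranks."""
        block = -(-n_bins // len(servers))       # ceiling division
        return servers[tally_bin // block]

    trackers, servers = decompose_ranks(n_ranks=8, n_servers=2)
    print("trackers:", trackers, "servers:", servers)
    for b in (0, 499_999, 500_000, 999_999):
        print(f"tally bin {b} -> rank {owning_server(b, 1_000_000, servers)}")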
16. Electron transport in radiotherapy using local-to-global Monte Carlo
SciTech Connect
Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.; Neuenschwander, H.; Mackie, T.R.; Reckwerdt, P.J.
1994-09-01
Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the steps to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.
17. Dosimetric variation due to the photon beam energy in the small-animal irradiation: A Monte Carlo study
SciTech Connect
Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A.
2010-10-15
Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely heterogeneous, homogeneous, and bone homogeneous, were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), and only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360 deg. photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly among the heterogeneous, homogeneous, and bone homogeneous phantoms for the 225 kVp compared to the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation. This dosimetric effect, due to the presence of the bone heterogeneity, was more significant than that due to the lung heterogeneity. Hence, for kV photon energies of the range used in small-animal irradiation, the increase of the mean and bone dose due to the photoelectric effect could be a dosimetric concern.
18. Backscatter towards the monitor ion chamber in high-energy photon and electron beams: charge integration versus Monte Carlo simulation
Verhaegen, F.; Symonds-Tayler, R.; Liu, H. H.; Nahum, A. E.
2000-11-01
In some linear accelerators, the charge collected by the monitor ion chamber is partly caused by backscattered particles from accelerator components downstream from the chamber. This influences the output of the accelerator and also has to be taken into account when output factors are derived from Monte Carlo simulations. In this work, the contribution of backscattered particles to the monitor ion chamber response of a Varian 2100C linac was determined for photon beams (6, 10 MV) and for electron beams (6, 12, 20 MeV). The experimental procedure consisted of charge integration from the target in a photon beam or from the monitor ion chamber in electron beams. The Monte Carlo code EGS4/BEAM was used to study the contribution of backscattered particles to the dose deposited in the monitor ion chamber. Both measurements and simulations showed a linear increase in backscatter fraction with decreasing field size for photon and electron beams. For 6 MV and 10 MV photon beams, a 2-3% increase in backscatter was obtained for a 0.5 x 0.5 cm2 field compared to a 40 x 40 cm2 field. The results for the 6 MV beam were slightly higher than for the 10 MV beam. For electron beams (6, 12, 20 MeV), an increase of similar magnitude was obtained from measurements and simulations for 6 MeV electrons. For higher energy electron beams a smaller increase in backscatter fraction was found. The problem is of less importance for electron beams since large variations of field size for a single electron energy usually do not occur.
19. Backscatter towards the monitor ion chamber in high-energy photon and electron beams: charge integration versus Monte Carlo simulation.
PubMed
Verhaegen, F; Symonds-Tayler, R; Liu, H H; Nahum, A E
2000-11-01
In some linear accelerators, the charge collected by the monitor ion chamber is partly caused by backscattered particles from accelerator components downstream from the chamber. This influences the output of the accelerator and also has to be taken into account when output factors are derived from Monte Carlo simulations. In this work, the contribution of backscattered particles to the monitor ion chamber response of a Varian 2100C linac was determined for photon beams (6, 10 MV) and for electron beams (6, 12, 20 MeV). The experimental procedure consisted of charge integration from the target in a photon beam or from the monitor ion chamber in electron beams. The Monte Carlo code EGS4/BEAM was used to study the contribution of backscattered particles to the dose deposited in the monitor ion chamber. Both measurements and simulations showed a linear increase in backscatter fraction with decreasing field size for photon and electron beams. For 6 MV and 10 MV photon beams, a 2-3% increase in backscatter was obtained for a 0.5 x 0.5 cm2 field compared to a 40 x 40 cm2 field. The results for the 6 MV beam were slightly higher than for the 10 MV beam. For electron beams (6, 12, 20 MeV), an increase of similar magnitude was obtained from measurements and simulations for 6 MeV electrons. For higher energy electron beams a smaller increase in backscatter fraction was found. The problem is of less importance for electron beams since large variations of field size for a single electron energy usually do not occur. PMID:11098896
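As a rough illustration of how a backscatter fraction is extracted from charge-integration data, the snippet below normalizes monitor-chamber signal per unit target charge to the largest field and reports the relative increase; the numbers are invented for illustration and are not the measurements reported above.

    # Hypothetical monitor signal per unit target charge (arbitrary units) by field size (cm).
    monitor_per_target_charge = {
        0.5: 1.025,   # small field: more backscatter reaches the monitor chamber
        10.0: 1.010,
        40.0: 1.000,  # reference open field
    }

    reference = monitor_per_target_charge[40.0]
    for field, signal in sorted(monitor_per_target_charge.items()):
        backscatter_increase = 100.0 * (signal / reference - 1.0)
        print(f"{field:5.1f} cm field: +{backscatter_increase:.1f}% monitor signal")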
20. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space
DOEpatents
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient: the overall sampling efficiency is 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
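A correlated-histogram source can be sampled by first choosing a joint bin with probability proportional to its weight and then drawing uniformly within that bin. The sketch below does this for a toy two-variable (energy, radius) histogram; the binning, weights, and variable choice are illustrative and much simpler than the patented representation.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_joint_histogram(weights, e_edges, r_edges, n_samples):
        """Sample (energy, radius) pairs from a 2-D weighted histogram."""
        flat = weights.ravel().astype(float)
        flat /= flat.sum()
        idx = rng.choice(flat.size, size=n_samples, p=flat)
        ie, ir = np.unravel_index(idx, weights.shape)
        # Uniform sampling inside the chosen bin preserves the binned correlation.
        e = rng.uniform(e_edges[ie], e_edges[ie + 1])
        r = rng.uniform(r_edges[ir], r_edges[ir + 1])
        return e, r

    # Toy histogram: lower-energy photons are more likely at larger radii.
    e_edges = np.array([0.0, 2.0, 4.0, 6.0])      # MeV
    r_edges = np.array([0.0, 1.0, 2.0, 3.0])      # cm
    weights = np.array([[1, 2, 4],
                        [3, 3, 2],
                        [5, 2, 1]])
    energies, radii = sample_joint_histogram(weights, e_edges, r_edges, 5)
    print(np.c_[energies, radii])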
1. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.
PubMed
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663
2. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
SciTech Connect
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
3. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
DOE PAGESBeta
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
4. Minimizing the cost of splitting in Monte Carlo radiation transport simulation
SciTech Connect
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of the sample variance and the time per particle history, σ_s^2 τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
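The quantity being minimized is the familiar Monte Carlo cost proxy, the product of sample variance and time per history (the inverse figure of merit). Given tabulated variance and timing estimates versus splitting ratio, the selection step reduces to an argmin, as sketched below with invented numbers standing in for the deterministic predictions.

    # Hypothetical per-history variance and CPU time for several splitting ratios.
    splitting_ratios = [1, 2, 4, 8, 16]
    variance = [4.0e-4, 1.6e-4, 7.0e-5, 4.5e-5, 4.0e-5]   # sigma_s^2
    time_per_history = [1.0, 1.3, 1.9, 3.2, 6.0]          # tau_p, arbitrary units

    cost = [s2 * t for s2, t in zip(variance, time_per_history)]
    best = min(range(len(cost)), key=cost.__getitem__)
    for r, c in zip(splitting_ratios, cost):
        print(f"split {r:2d}: cost = {c:.2e}")
    print("optimum splitting ratio:", splitting_ratios[best])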
5. Capabilities, Implementation, and Benchmarking of Shift, a Massively Parallel Monte Carlo Radiation Transport Code
DOE PAGESBeta
Pandya, Tara M; Johnson, Seth R; Evans, Thomas M; Davidson, Gregory G; Hamilton, Steven P; Godfrey, Andrew T
2016-01-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
6. A simplified spherical harmonic method for coupled electron-photon transport calculations
SciTech Connect
Josef, J.A.
1996-12-01
In this thesis we have developed a simplified spherical harmonic method (SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. We have performed a first time Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. Our theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. Previous analyses have indicated that the P_1 DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S_N equations, yet is very effective for the 1-D S_N equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP_N method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP_N method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations.
7. Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program
SciTech Connect
Moskowitz, B.S.
2000-02-01
This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems.
8. MONTE CARLO PARTICLE TRANSPORT IN MEDIA WITH EXPONENTIALLY VARYING TIME-DEPENDENT CROSS-SECTIONS
SciTech Connect
F. BROWN; W. MARTIN
2001-02-01
A probability density function (PDF) and random sampling procedure for the distance to collision were derived for the case of exponentially varying cross-sections. Numerical testing indicates that both are correct. This new sampling procedure has direct application in a new method for Monte Carlo radiation transport, and may be generally useful for analyzing physical problems where the material cross-sections change very rapidly in an exponential manner.
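The abstract does not reproduce the sampling formula, but for a total cross-section of the form Sigma(t) = Sigma0*exp(a*t) the inverse-CDF result follows from setting the accumulated optical depth equal to -ln(xi). The sketch below implements that textbook derivation as a plausible reconstruction; the report's exact PDF and procedure may differ in detail.

    import math
    import random

    def sample_time_to_collision(sigma0, a, rng=random.random):
        """Sample t from p(t) = Sigma(t) * exp(-integral_0^t Sigma dt'),
        with Sigma(t) = sigma0 * exp(a * t)."""
        xi = rng()
        tau = -math.log(xi)                   # sampled optical depth
        if abs(a) < 1e-12:                    # constant cross-section limit
            return tau / sigma0
        arg = 1.0 + a * tau / sigma0
        if arg <= 0.0:                        # decaying Sigma: the particle may never collide
            return math.inf
        return math.log(arg) / a

    random.seed(1)
    samples = [sample_time_to_collision(sigma0=2.0, a=-0.5) for _ in range(5)]
    print(samples)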
9. Cavity-photon-switched coherent transient transport in a double quantum waveguide
SciTech Connect
Abdullah, Nzar Rauf; Gudmundsson, Vidar; Tang, Chi-Shung; Manolescu, Andrei
2014-12-21
We study cavity-photon-switched coherent electron transport in a symmetric double quantum waveguide. The waveguide system is weakly connected to two electron reservoirs, but strongly coupled to a single quantized photon cavity mode. A coupling window is placed between the waveguides to allow electron interference or inter-waveguide transport. The transient electron transport in the system is investigated using a quantum master equation. We present a cavity-photon tunable semiconductor quantum waveguide implementation of an inverter quantum gate, in which the output of the waveguide system may be selected by choosing an appropriate photon number or “photon frequency” of the cavity. In addition, the importance of the photon polarization in the cavity, that is, either parallel or perpendicular to the direction of electron propagation in the waveguide system, is demonstrated.
10. Magnetic confinement of electron and photon radiotherapy dose: A Monte Carlo simulation with a nonuniform longitudinal magnetic field
SciTech Connect
Chen Yu; Bielajew, Alex F.; Litzenberg, Dale W.; Moran, Jean M.; Becchetti, Frederick D.
2005-12-15
It recently has been shown experimentally that the focusing provided by a longitudinal nonuniform high magnetic field can significantly improve electron beam dose profiles. This could permit precise targeting of tumors near critical areas and minimize the radiation dose to surrounding healthy tissue. The experimental results together with Monte Carlo simulations suggest that the magnetic confinement of electron radiotherapy beams may provide an alternative to proton or heavy ion radiation therapy in some cases. In the present work, the external magnetic field capability of the Monte Carlo code PENELOPE was utilized by providing a subroutine that modeled the actual field produced by the solenoid magnet used in the experimental studies. The magnetic field in our simulation covered the region from the vacuum exit window to the phantom including surrounding air. In a longitudinal nonuniform magnetic field, it is observed that the electron dose can be focused in both the transverse and longitudinal directions. The measured dose profiles of the electron beam are generally reproduced in the Monte Carlo simulations to within a few percent in the region of interest provided that the geometry and the energy of the incident electron beam are accurately known. Comparisons for the photon beam dose profiles with and without the magnetic field are also made. The experimental results are qualitatively reproduced in the simulation. Our simulation shows that the excessive dose at the beam entrance is due to the magnetic field trapping and focusing scattered secondary electrons that were produced in the air by the incident photon beam. The simulations also show that the electron dose profile can be manipulated by the appropriate control of the beam energy together with the strength and displacement of the longitudinal magnetic field.
11. Boltzmann equation and Monte Carlo studies of electron transport in resistive plate chambers
Bošnjaković, D.; Petrović, Z. Lj; White, R. D.; Dujko, S.
2014-10-01
A multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique are used to investigate electron transport in Resistive Plate Chambers (RPCs) that are used for timing and triggering purposes in many high energy physics experiments at CERN and elsewhere. Using cross sections for electron scattering in C2H2F4, iso-C4H10 and SF6 as an input in our Boltzmann and Monte Carlo codes, we have calculated data for electron transport as a function of reduced electric field E/N in various C2H2F4/iso-C4H10/SF6 gas mixtures used in RPCs in the ALICE, CMS and ATLAS experiments. Emphasis is placed upon the explicit and implicit effects of non-conservative collisions (e.g. electron attachment and/or ionization) on the drift and diffusion. Among many interesting and atypical phenomena induced by the explicit effects of non-conservative collisions, we note the existence of negative differential conductivity (NDC) in the bulk drift velocity component with no indication of any NDC for the flux component in the ALICE timing RPC system. We systematically study the origin and mechanisms for such phenomena as well as the possible physical implications which arise from their explicit inclusion into models of RPCs. Spatially-resolved electron transport properties are calculated using a Monte Carlo simulation technique in order to understand these phenomena.
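The bulk/flux distinction invoked here is simple to state in simulation terms: the bulk drift velocity is the time derivative of the swarm's centre of mass, which non-conservative collisions can shift, while the flux drift velocity is the ensemble-averaged velocity. A minimal sketch with synthetic numbers is given below; real swarm analyses average over many particles and collisions.

    import numpy as np

    def bulk_drift_velocity(times, mean_positions):
        """W_bulk = d<z>/dt, from successive swarm centre-of-mass positions."""
        return np.gradient(mean_positions, times)

    def flux_drift_velocity(velocities, weights=None):
        """W_flux = <v_z>, the ensemble-averaged velocity at one instant."""
        return np.average(velocities, weights=weights)

    # Synthetic swarm: attachment preferentially removing slow electrons at the back
    # of the swarm makes the centre of mass advance faster than <v_z> (illustration).
    t = np.array([0.0, 1.0, 2.0, 3.0])           # ns
    z_mean = np.array([0.0, 0.12, 0.25, 0.39])   # mm
    vz_now = np.array([0.10, 0.11, 0.12, 0.13])  # mm/ns for four sample electrons
    print("W_bulk:", bulk_drift_velocity(t, z_mean))
    print("W_flux:", flux_drift_velocity(vz_now))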
12. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4
Dixon, D. A.; Prinja, A. K.; Franke, B. C.
2015-09-01
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
13. Epp - A C++ EGSnrc user code for Monte Carlo simulation of radiation transport
Cui, Congwu; Lippuner, Jonas; Ingleby, Harry R.; Di Valentino, David N. M.; Elbakri, Idris A.
2010-04-01
Easy particle propagation (Epp) is a Monte Carlo simulation EGSnrc user code that we have developed for dose calculation in a voxelized volume and to generate images of an arbitrary geometry irradiated by a particle source. The dose calculation aspect is a reimplementation of the function of DOSXYZnrc with new features added and some restrictions removed. Epp is designed for x-ray applications, but can be readily extended to trace other kinds of particles. Epp is based on the EGSnrc C++ class library (egspp), which makes modeling particle sources and simulation geometries simpler than in DOSXYZnrc and other BEAM user codes based on the EGSnrc code system. With Epp, geometries can be modeled analytically, or voxelized geometries such as those in DOSXYZnrc can be used. Compared to DOSXYZnrc (slightly modified from the official version for saving phase space information of photons leaving the geometry), Epp is at least two times faster. Photon propagation to the image plane is integrated into Epp (other particles are possible with a minor extension to the current code) with an ideal detector defined. When only the resultant images are needed, there is no need to save the particle data. This results in significant savings of data storage space, network load, and time for file I/O. Epp was validated against DOSXYZnrc for imaging and dose calculation by comparing simulation results with the same input. Epp can be used as a Monte Carlo simulation tool for faster imaging and radiation dose applications.
14. Monte Carlo study of the energy and angular dependence of the response of plastic scintillation detectors in photon beams
SciTech Connect
Wang, Lilie L. W.; Klein, David; Beddar, A. Sam
2010-10-15
Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: a plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached by a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached by an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs' responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence within about 0.5% at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence within 1% in the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all the angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most of the angles, except in the case when the fiber stem is pointing right at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of all three PSDs modeled in this study in the photon energy range from 200 keV (monoenergetic) to 18 MV (linac beam). The PSD would be even closer to water equivalent if there is a silica tube around the sensitive volume. The angular dependence of the response of the three PSDs in a 6 MV photon beam is not of concern at the 2% level.
15. Exponentially-convergent Monte Carlo for the 1-D transport equation
SciTech Connect
Peterson, J. R.; Morel, J. E.; Ragusa, J. C.
2013-07-01
We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)
16. Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.
2003-01-01
Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.
17. GPU-Accelerated Monte Carlo Electron Transport Methods: Development and Application for Radiation Dose Calculations Using Six GPU cards
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous EnviRonments - is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and the testing involving radiation dose related problems. In particular, the paper discusses the electron transport simulations using the class-II condensed history method. The considered electron energy ranges from a few hundreds of keV to 30 MeV. For the photon part, the photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry was supported. A serial CPU code was first written in C++. The code was then transplanted to the GPU using the CUDA C 5.0 standards. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10^6 electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. On-going work continues to test the code for different medical applications such as radiotherapy and brachytherapy.
18. A Monte Carlo tool for combined photon and proton treatment planning verification
Seco, J.; Jiang, H.; Herrup, D.; Kooy, H.; Paganetti, H.
2007-06-01
Photons and protons are usually used independently to treat cancer. However, at MGH patients can be treated with both photons and protons since both modalities are available on site. A combined therapy can be advantageous in cancer therapy due to the skin sparing ability of photons and the sharp Bragg peak fall-off for protons beyond the tumor. In the present work, we demonstrate how to implement a combined 3D MC toolkit for photon and proton (ph-pr) therapy, which can be used for verification of the treatment plan. The commissioning of an MC system for combined ph-pr involves initially the development of an MC model of both the photon and proton treatment heads. The MC dose tool was evaluated on a head and neck patient treated with both combined photon and proton beams. The combined ph-pr dose agreed with measurements in a solid water phantom to within 3%/3 mm. Comparison with the commercial planning system's pencil beam prediction agrees to within 3% (except for air cavities and bone regions).
19. Topological Photonic Quasicrystals: Fractal Topological Spectrum and Protected Transport
Bandres, Miguel A.; Rechtsman, Mikael C.; Segev, Mordechai
2016-01-01
We show that it is possible to have a topological phase in two-dimensional quasicrystals without any magnetic field applied, but instead introducing an artificial gauge field via dynamic modulation. This topological quasicrystal exhibits scatter-free unidirectional edge states that are extended along the system's perimeter, contrary to the states of an ordinary quasicrystal system, which are characterized by power-law decay. We find that the spectrum of this Floquet topological quasicrystal exhibits a rich fractal (self-similar) structure of topological "minigaps," manifesting an entirely new phenomenon: fractal topological systems. These topological minigaps form only when the system size is sufficiently large because their gapless edge states penetrate deep into the bulk. Hence, the topological structure emerges as a function of the system size, contrary to periodic systems where the topological phase can be completely characterized by the unit cell. We demonstrate the existence of this topological phase both by using a topological index (Bott index) and by studying the unidirectional transport of the gapless edge states and its robustness in the presence of defects. Our specific model is a Penrose lattice of helical optical waveguides—a photonic Floquet quasicrystal; however, we expect this new topological quasicrystal phase to be universal.
20. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
SciTech Connect
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
1. A bone composition model for Monte Carlo x-ray transport simulations
SciTech Connect
Zhou Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
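The central idea, calcium and phosphorus weight fractions expressed as polynomial functions of bone density, can be illustrated in a few lines of numpy. The density and composition values below are placeholders standing in for the literature data used by the authors, so the fitted coefficients are illustrative only.

    import numpy as np

    # Placeholder (density g/cm^3, Ca weight fraction, P weight fraction) values
    # standing in for the published adult-bone compositions used in the paper.
    density = np.array([1.18, 1.33, 1.46, 1.61, 1.92])
    w_ca    = np.array([0.076, 0.118, 0.146, 0.176, 0.225])
    w_p     = np.array([0.035, 0.056, 0.069, 0.082, 0.103])

    ca_poly = np.polyfit(density, w_ca, deg=2)   # quadratic in density (assumed order)
    p_poly  = np.polyfit(density, w_p,  deg=2)

    def bone_composition(rho):
        """Return (Ca, P) weight fractions predicted for a bone of density rho."""
        return np.polyval(ca_poly, rho), np.polyval(p_poly, rho)

    print(bone_composition(1.50))   # model bone at 1.50 g/cm^3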
2. A bone composition model for Monte Carlo x-ray transport simulations.
PubMed
Zhou, Hu; Keall, Paul J; Graves, Edward E
2009-03-01
In the megavoltage energy range although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated on the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones. PMID:19378761
3. High-speed evaluation of track-structure Monte Carlo electron transport simulations.
PubMed
Pasciak, A S; Ford, J R
2008-10-01
There are many instances where Monte Carlo simulation using the track-structure method for electron transport is necessary for the accurate analytical computation and estimation of dose and other tally data. Because of the large electron interaction cross-sections and highly anisotropic scattering behavior, the track-structure method requires an enormous amount of computation time. For microdosimetry, radiation biology and other applications involving small site and tally sizes, low electron energies or high-Z/low-Z material interfaces where the track-structure method is preferred, a computational device called a field-programmable gate array (FPGA) is capable of executing track-structure Monte Carlo electron-transport simulations as fast as or faster than a standard computer can complete an identical simulation using the condensed history (CH) technique. In this paper, data from FPGA-based track-structure electron-transport computations are presented for five test cases, from simple slab-style geometries to radiation biology applications involving electrons incident on endosteal bone surface cells. For the most complex test case presented, an FPGA is capable of evaluating track-structure electron-transport problems more than 500 times faster than a standard computer can perform the same track-structure simulation and with comparable accuracy. PMID:18780958
4. Ion beam transport in tissue-like media using the Monte Carlo code SHIELD-HIT.
PubMed
Gudowska, Irena; Sobolevsky, Nikolai; Andreo, Pedro; Belkić, Dževad; Brahme, Anders
2004-05-21
The development of the Monte Carlo code SHIELD-HIT (heavy ion transport) for the simulation of the transport of protons and heavier ions in tissue-like media is described. The code SHIELD-HIT, a spin-off of SHIELD (available as RSICC CCC-667), extends the transport of hadron cascades from standard targets to that of ions in arbitrary tissue-like materials, taking into account ionization energy-loss straggling and multiple Coulomb scattering effects. The consistency of the results obtained with SHIELD-HIT has been verified against experimental data and other existing Monte Carlo codes (PTRAN, PETRA), as well as with deterministic models for ion transport, comparing depth distributions of energy deposition by protons, 12C and 20Ne ions impinging on water. The SHIELD-HIT code yields distributions consistent with a proper treatment of nuclear inelastic collisions. Energy depositions up to and well beyond the Bragg peak due to nuclear fragmentations are well predicted. Satisfactory agreement is also found with experimental determinations of the number of fragments of a given type, as a function of depth in water, produced by 12C and 14N ions of 670 MeV u(-1), although less favourable agreement is observed for heavier projectiles such as 16O ions of the same energy. The calculated neutron spectra differential in energy and angle produced in a mimic of a Martian rock by irradiation with 12C ions of 290 MeV u(-1) also shows good agreement with experimental data. It is concluded that a careful analysis of stopping power data for different tissues is necessary for radiation therapy applications, since an incorrect estimation of the position of the Bragg peak might lead to a significant deviation from the prescribed dose in small target volumes. The results presented in this study indicate the usefulness of the SHIELD-HIT code for Monte Carlo simulations in the field of light ion radiation therapy. PMID:15214534
5. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
6. Surface dose reduction from bone interface in kilovoltage X-ray radiation therapy: a Monte Carlo study of photon spectra.
PubMed
Chow, James C L; Owrangi, Amir M
2012-01-01
This study evaluated the dosimetric impact of surface dose reduction due to the loss of backscatter from the bone interface in kilovoltage (kV) X-ray radiation therapy. Monte Carlo simulation was carried out using the EGSnrc code. An inhomogeneous phantom containing a thin water layer (0.5-5 mm) on top of a bone (thickness = 1 cm) was irradiated by a clinical 105 kVp photon beam produced by a Gulmay D3225 X-ray machine. Field sizes of 2, 5, and 10 cm diameter and source-to-surface distance of 20 cm were used. Surface doses for different phantom configurations were calculated using the DOSXYZnrc code. Photon energy spectra at the phantom surface and bone were determined according to the phase-space files at the particle scoring planes which included the multiple crossers. For comparison, all Monte Carlo simulations were repeated in a phantom with the bone replaced by water. Surface dose reduction was found when a bone was underneath the water layer. When the water thickness was equal to 1 mm for the circular field of 5 cm diameter, a surface dose reduction of 6.3% was found. The dose reduction decreased to 4.7% and 3.4% when the water thickness increased to 3 and 5 mm, respectively. This shows that the impact of the surface dose uncertainty decreased while the water thickness over the bone increased. This result was supported by the decrease in relative intensity of the lower energy photons in the energy spectrum when the water layer was with and over the bone, compared to without the bone. We concluded that surface dose reduction of 7.8%-1.1% was found when the water thickness increased from 0.5-5 mm for circular fields with diameters ranging from 2-10 cm. This decrease of surface dose results in an overestimation of prescribed dose at the patient's surface, and might be a concern when using kV photon beam to treat skin tumors in sites such as forehead, chest wall, and kneecap. PMID:22955657
7. Monte Carlo simulation of the operational quantities at the realistic mixed neutron-photon radiation fields CANEL and SIGMA.
PubMed
Lacoste, V; Gressier, V
2007-01-01
The Institute for Radiological Protection and Nuclear Safety owns two facilities producing realistic mixed neutron-photon radiation fields, CANEL, an accelerator driven moderator modular device, and SIGMA, a graphite moderated americium-beryllium assembly. These fields are representative of some of those encountered at nuclear workplaces, and the corresponding facilities are designed and used for calibration of various instruments, such as survey meters, personal dosimeters or spectrometric devices. In the framework of the European project EVIDOS, irradiations of personal dosimeters were performed at CANEL and SIGMA. Monte Carlo calculations were performed to estimate the reference values of the personal dose equivalent at both facilities. The Hp(10) values were calculated for three different angular positions, 0 degrees, 45 degrees and 75 degrees, of an ICRU phantom located at the position of irradiation. PMID:17578872
8. On Monte Carlo modeling of megavoltage photon beams: A revisited study on the sensitivity of beam parameters
SciTech Connect
Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie
2011-01-15
Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2×2 to 35×35 cm²). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma ["On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam," Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (E_AV is the mean energy, FWHM is the energy spectrum broadening, and R is the beam radius), the angular divergence (θ) of primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2×2 to 35×35 cm². In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use θ=0°, while the 4 and 15 MV beam models require θ=0.5° and 0.6°, respectively. The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2×2 cm²) and large (e.g., 35×35 cm²) fields, while a perfect agreement is maintained for the 10×10 cm² field. The influence of R on the central-axis depth dose and the strong influence of θ on the lateral dose profiles are demonstrated. Conclusions: Dose distributions for very small and very large fields were proved to be more sensitive to variations in E_AV, R, and θ in comparison with the 10×10 cm² field. Monte Carlo beam models need to be validated for a wide range of field sizes including small field sizes (e.g., 2×2 cm²).
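To make the four primary electron beam parameters concrete, here is a minimal sketch of how a phase space of primary electrons could be sampled from them; the Gaussian and uniform-disc distributions and all numbers are assumptions for illustration, not the GEPTS implementation.

```python
# Minimal sketch of sampling primary-electron phase space from the four beam
# parameters named in the abstract (mean energy E_AV, spectrum FWHM, spot
# radius R, angular divergence theta). Not GEPTS code; distributions are
# assumed Gaussian/uniform for illustration only.
import numpy as np

def sample_primary_electrons(n, e_av=6.2, fwhm=0.8, r_mm=1.0, theta_deg=0.5,
                             rng=np.random.default_rng(0)):
    sigma_e = fwhm / 2.355                       # Gaussian FWHM -> sigma
    energy = rng.normal(e_av, sigma_e, n)        # MeV
    # Uniform disc of radius R for the beam spot.
    rad = r_mm * np.sqrt(rng.uniform(0.0, 1.0, n))
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    x, y = rad * np.cos(phi), rad * np.sin(phi)
    # Small Gaussian angular divergence about the beam axis.
    tilt = np.abs(rng.normal(0.0, np.radians(theta_deg), n))
    return energy, x, y, tilt

e, x, y, tilt = sample_primary_electrons(100000)
print(e.mean(), np.degrees(tilt).mean())
```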
9. SAF values for internal photon emitters calculated for the RPI-P pregnant-female models using Monte Carlo methods
SciTech Connect
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
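As a reminder of what a specific absorbed fraction is in the MIRD schema, the sketch below computes SAF as the absorbed fraction per unit target mass; the tally numbers are placeholders, not RPI-P results.

```python
# Minimal sketch of the specific absorbed fraction (SAF) used in the MIRD
# schema: SAF(target <- source) = (absorbed fraction in target) / target mass.
# The numbers below are placeholders, not RPI-P model results.

def specific_absorbed_fraction(e_deposited_mev, e_emitted_mev, target_mass_kg):
    absorbed_fraction = e_deposited_mev / e_emitted_mev
    return absorbed_fraction / target_mass_kg     # kg^-1

# Hypothetical Monte Carlo tally: 1e7 photons of 0.1 MeV emitted in a source
# organ, 2.4e4 MeV scored in a 0.35 kg target organ.
print(specific_absorbed_fraction(2.4e4, 1e7 * 0.1, 0.35))
```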
10. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations.
PubMed
Ulmer, W; Pyyry, J; Kaissl, W
2005-04-21
Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using Monte Carlo codes GEANT-Fluka, versions 95, 98, 2002, and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab-geometry was used as a phantom model to test computed results by measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤ 5.5 cm² and densities ≤ 0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. PMID:15815095
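The idea of a lateral pencil-beam kernel built from a sum of Gaussians can be illustrated with a simple 2D convolution of an open-field fluence map; the weights and sigmas below are placeholders, not the published triple-Gaussian parameters.

```python
# Illustrative sketch (not the published algorithm): convolving an open-field
# fluence map with a lateral kernel made of three Gaussians. Weights and
# sigmas are placeholders.
import numpy as np
from scipy.signal import fftconvolve

def triple_gaussian_kernel(x_mm, weights=(0.7, 0.25, 0.05), sigmas=(2.0, 6.0, 20.0)):
    xx, yy = np.meshgrid(x_mm, x_mm)
    r2 = xx**2 + yy**2
    k = sum(w * np.exp(-r2 / (2.0 * s**2)) / (2.0 * np.pi * s**2)
            for w, s in zip(weights, sigmas))
    return k / k.sum()

x = np.arange(-50.0, 50.0 + 1.0, 1.0)             # 1 mm grid
fluence = np.zeros((x.size, x.size))
fluence[30:71, 30:71] = 1.0                       # roughly a 4x4 cm open field
kernel = triple_gaussian_kernel(x)
dose2d = fftconvolve(fluence, kernel, mode="same")
print(dose2d.max())
```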
11. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.
PubMed
Jabbari, Keyvan; Seuntjens, Jan
2014-07-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). This code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated against MCNPX as a reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, the dose calculations and isodose distributions of the fast code deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
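A toy version of the pre-generated-track idea is sketched below: tracks produced once offline are replayed at run time with a cheap geometric transformation, so no physics has to be resampled. This is illustrative only and not the authors' implementation; the track library here is synthetic.

```python
# Toy sketch of the pre-generated-track idea (not the authors' implementation):
# tracks simulated once offline are reused at run time, flipped about the beam
# axis so that repeated histories do not overlap exactly.
import numpy as np

rng = np.random.default_rng(1)

# Pretend "offline" library: each track is an array of (z_cm, x_cm, edep_MeV) steps.
track_library = [np.column_stack([np.linspace(0, 15, 60),
                                  rng.normal(0, 0.2, 60).cumsum() * 0.05,
                                  np.full(60, 0.5)]) for _ in range(200)]

def deposit(track, dose, z_bins, x_bins):
    zi = np.clip(np.digitize(track[:, 0], z_bins) - 1, 0, dose.shape[0] - 1)
    xi = np.clip(np.digitize(track[:, 1], x_bins) - 1, 0, dose.shape[1] - 1)
    np.add.at(dose, (zi, xi), track[:, 2])

z_bins = np.linspace(0, 16, 81)
x_bins = np.linspace(-2, 2, 41)
dose = np.zeros((80, 40))
for _ in range(10000):
    trk = track_library[rng.integers(len(track_library))].copy()
    trk[:, 1] *= rng.choice([-1.0, 1.0])          # cheap mirror "rotation" in 2D
    deposit(trk, dose, z_bins, x_bins)
print(dose.sum())
```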
12. MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES
SciTech Connect
Afanasiev, A.; Vainio, R.
2013-08-15
A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfvén waves is presented. The key point of the model is that, unlike previous ones, it employs the full form (i.e., including the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfvén waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of the isotropic scattering on which the previous Monte Carlo models were based. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfvén waves reveal that anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration; for example, it can affect the scattering mean free path of the accelerated particles in the foreshock region and the size of that region.
13. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
PubMed Central
Jabbari, Keyvan; Seuntjens, Jan
2014-01-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). This code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated against MCNPX as a reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, the dose calculations and isodose distributions of the fast code deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
14. Analysis of atmospheric gamma-ray flashes detected in near space with allowance for the transport of photons in the atmosphere
Babich, L. P.; Donskoy, E. N.; Kutsyk, I. M.
2008-07-01
Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.
15. Analysis of atmospheric gamma-ray flashes detected in near space with allowance for the transport of photons in the atmosphere
SciTech Connect
Babich, L. P. Donskoy, E. N.; Kutsyk, I. M.
2008-07-15
Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.
16. New nuclear data for high-energy all-particle Monte Carlo transport
SciTech Connect
Cox, L.J.; Chadwick, M.B.; Resler, D.A.
1994-06-01
We are extending the LLNL nuclear data libraries to 250 MeV for neutron and proton interaction with biologically important nuclei, i.e. H, C, N, O, F, P, and Ca. Because of the large number of reaction channels that open with increasing energies, the data is generated in particle production cross section format with energy-angle correlated distributions for the outgoing particles in the laboratory frame of reference. The new Production Cross Section data Library (PCSL) will be used in PEREGRINE -- the new all-particle Monte Carlo transport code being developed at LLNL for dose calculation in radiation therapy planning.
17. Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport
Dureau, David; Poëtte, Gaël
2014-06-01
This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, thanks to both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousands of cores on the petaflop-scale supercomputer Tera100.
18. Monte Carlo evaluation of electron transport in heterojunction bipolar transistor base structures
Maziar, C. M.; Klausmeier-Brown, M. E.; Bandyopadhyay, S.; Lundstrom, M. S.; Datta, S.
1986-07-01
Electron transport through base structures of Al(x)Ga(1-x)As heterojunction bipolar transistors is evaluated by Monte Carlo simulation. Simulation results demonstrate the effectiveness of both ballistic launching ramps and graded bases for reducing base transit time. Both techniques are limited, however, in their ability to maintain short transit times across the wide bases that are desirable for reduction of base resistance. Simulation results demonstrate that neither technique is capable of maintaining a 1-ps transit time across a 0.25-micron base. The physical mechanisms responsible for limiting the performance of each structure are identified and a promising hybrid structure is described.
19. Evaluation of PENFAST--a fast Monte Carlo code for dose calculations in photon and electron radiotherapy treatment planning.
PubMed
Habib, B; Poumarede, B; Tola, F; Barthe, J
2010-01-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within +/-1% to 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within a 1% to 1 mm difference with those produced by PENELOPE, and to within a 2% to 2 mm difference with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. PMID:19342258
20. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.
PubMed
Piels, Molly; Zibar, Darko
2016-02-01
The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
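For readers unfamiliar with the approach, a minimal Metropolis-Hastings sketch of Markov chain Monte Carlo parameter estimation is given below on a generic toy model; it is not the RF-photonic equivalent-circuit fit from the paper, but it shows how a posterior mean and uncertainty are read off the chain.

```python
# Minimal Metropolis-Hastings sketch of MCMC parameter estimation (generic,
# not the RF-photonics model from the paper): fit one parameter of a toy model
# and read its uncertainty off the posterior samples.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
true_tau = 0.3
data = np.exp(-x / true_tau) + rng.normal(0.0, 0.02, x.size)   # synthetic data

def log_posterior(tau, sigma=0.02):
    if tau <= 0:
        return -np.inf
    resid = data - np.exp(-x / tau)
    return -0.5 * np.sum((resid / sigma) ** 2)     # flat prior on tau > 0

samples, tau = [], 0.5
lp = log_posterior(tau)
for _ in range(20000):
    prop = tau + rng.normal(0.0, 0.02)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis acceptance test
        tau, lp = prop, lp_prop
    samples.append(tau)
post = np.array(samples[5000:])                    # drop burn-in
print(post.mean(), post.std())
```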
1. 3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.
PubMed
Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E
2016-05-01
The application of combined neutron-photon tomography for 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources, such as 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of electron density and related mass density, neutrons aid in estimating the product of density and material-specific microscopic cross section; the ratio between the two provides the composition, while CT allows shape evaluation. Using a developed imaging technique, objects and their material compositions have been visualized. PMID:26953978
2. Coupling 3D Monte Carlo light transport in optically heterogeneous tissues to photoacoustic signal generation.
PubMed
Jacques, Steven L
2014-12-01
The generation of photoacoustic signals for imaging objects embedded within tissues is dependent on how well light can penetrate to and deposit energy within an optically absorbing object, such as a blood vessel. This report couples a 3D Monte Carlo simulation of light transport to stress wave generation to predict the acoustic signals received by a detector at the tissue surface. The Monte Carlo simulation allows modeling of optically heterogeneous tissues, and a simple MATLAB acoustic algorithm predicts signals reaching a surface detector. An example simulation considers a skin with a pigmented epidermis, a dermis with a background blood perfusion, and a 500-µm-dia. blood vessel centered at a 1-mm depth in the skin. The simulation yields acoustic signals received by a surface detector, which are generated by a pulsed 532-nm laser exposure before and after inserting the blood vessel. A MATLAB version of the acoustic algorithm and a link to the 3D Monte Carlo website are provided. PMID:25426426
3. A direction-selective flattening filter for clinical photon beams. Monte Carlo evaluation of a new concept
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-07-01
A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF) by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained up to any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm², 4 × 14.5 cm² or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department, patient comfort, reduction of errors due to patient movement and a slight, probably just noticeable improvement of the treatment outcome due to radiobiological reasons. In comparison with the FF system, the number of head leakage photons per Gy in the irradiated region has been reduced at 15 MV by factors 1/2.54 (DSF) and 1/3.96 (FFF), and the source strength of photoneutrons was reduced by factors 1/2.81 (DSF) and 1/3.49 (FFF).
4. A deterministic electron, photon, proton and heavy ion transport suite for the study of the Jovian moon Europa
Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.
2011-02-01
A Langley research center (LaRC) developed deterministic suite of radiation transport codes describing the propagation of electron, photon, proton and heavy ion in condensed media is used to simulate the exposure from the spectral distribution of the aforementioned particles in the Jovian radiation environment. Based on the measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (COS). The deterministic particle transport suite consists of a coupled electron photon algorithm (CEPTRN) and a coupled light heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means to the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the jet propulsion laboratory (JPL) Galileo interim radiation electron (GIRE) model of 2003; for the sake of relevance to the upcoming Europa Jupiter system mission (EJSM), the JPL provided Europa mission fluence spectrum, is used to produce the corresponding depth dose curve in silicon behind a default aluminum shield of 100 mils (0.7 g/cm2). The transport suite can also accept a geometry describing ray traced thickness file from a computer aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle. In that regard, using a low fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified versus Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe extended mission. For the upcoming EJSM mission with an expected launch date of 2020, the transport suite is used to compute the depth dose profile for the traditional aluminum silicon as a standard shield target combination, as well as simulating the shielding response of a high charge number (Z) material such as tantalum (Ta). Finally, a shield optimization algorithm is discussed which can guide the instrument designers and fabrication personnel with the choice of graded-Z shield selection and analysis.
5. Single photon transport along a one-dimensional waveguide with a side manipulated cavity QED system.
PubMed
Yan, Cong-Hua; Wei, Lian-Fu
2015-04-20
An external mirror coupling to a cavity with a two-level atom inside is put forward to control the photon transport along a one-dimensional waveguide. Using a full quantum theory of photon transport in real space, it is shown that the Rabi splittings of the photonic transmission spectra can be controlled by the cavity-mirror couplings; the splittings could still be observed even when the cavity-atom system works in the weak coupling regime, and the transmission probability of the resonant photon can be modulated from 0 to 100%. Additionally, our numerical results show that the appearance of Fano resonance is related to the strengths of the cavity-mirror coupling and the dissipations of the system. An experimental demonstration of the proposal with the current photonic crystal waveguide technique is suggested. PMID:25969078
6. Output correction factors for nine small field detectors in 6 MV radiation therapy photon beams: A PENELOPE Monte Carlo study
SciTech Connect
Benmakhlouf, Hamza; Sempau, Josep; Andreo, Pedro
2014-04-15
Purpose: To determine detector-specific output correction factors, k_{Qclin,Qmsr}^{fclin,fmsr}, in 6 MV small photon beams for air and liquid ionization chambers, silicon diodes, and diamond detectors from two manufacturers. Methods: Field output factors, defined according to the international formalism published by Alfonso et al. [Med. Phys. 35, 5179–5186 (2008)], relate the dosimetry of small photon beams to that of the machine-specific reference field; they include a correction to measured ratios of detector readings, conventionally used as output factors in broad beams. Output correction factors were calculated with the PENELOPE Monte Carlo (MC) system with a statistical uncertainty (type-A) of 0.15% or lower. The geometries of the detectors were coded using blueprints provided by the manufacturers, and phase-space files for field sizes between 0.5 × 0.5 cm² and 10 × 10 cm² from a Varian Clinac iX 6 MV linac used as sources. The output correction factors were determined scoring the absorbed dose within a detector and to a small water volume in the absence of the detector, both at a depth of 10 cm, for each small field and for the reference beam of 10 × 10 cm². Results: The Monte Carlo calculated output correction factors for the liquid ionization chamber and the diamond detector were within about ±1% of unity even for the smallest field sizes. Corrections were found to be significant for small air ionization chambers due to their cavity dimensions, as expected. The correction factors for silicon diodes varied with the detector type (shielded or unshielded), confirming the findings by other authors; different corrections for the detectors from the two manufacturers were obtained. The differences in the calculated factors for the various detectors were analyzed thoroughly and whenever possible the results were compared to published data, often calculated for different accelerators and using the EGSnrc MC system. The differences were used to estimate a type-B uncertainty for the correction factors. Together with the type-A uncertainty from the Monte Carlo calculations, an estimation of the combined standard uncertainty was made, assigned to the mean correction factors from various estimates. Conclusions: The present work provides a consistent and specific set of data for the output correction factors of a broad set of detectors in a Varian Clinac iX 6 MV accelerator and contributes to improving the understanding of the physics of small photon beams. The correction factors cannot in general be neglected for any detector and, as expected, their magnitude increases with decreasing field size. Due to the reduced number of clinical accelerator types currently available, it is suggested that detector output correction factors be given specifically for linac models and field sizes, rather than for a beam quality specifier that necessarily varies with the accelerator type and field size due to the different electron spot dimensions and photon collimation systems used by each accelerator model.
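In the Alfonso et al. formalism referred to above, the field output factor is the measured detector reading ratio multiplied by the detector-specific correction factor; a minimal sketch with placeholder numbers:

```python
# Sketch of how the Alfonso et al. formalism uses these factors: the field
# output factor is the detector reading ratio multiplied by the
# detector-specific correction factor k. Numbers below are placeholders.

def field_output_factor(m_clin, m_msr, k_correction):
    """Omega = (M_clin / M_msr) * k_{Qclin,Qmsr}^{fclin,fmsr}."""
    return (m_clin / m_msr) * k_correction

# Hypothetical unshielded-diode readings for a 1x1 cm2 field vs the 10x10 cm2
# reference field, with an illustrative correction factor of 0.96.
print(field_output_factor(m_clin=0.68, m_msr=1.00, k_correction=0.96))
```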
7. Unified single-photon and single-electron counting statistics: From cavity QED to electron transport
SciTech Connect
Lambert, Neill; Chen, Yueh-Nan; Nori, Franco
2010-12-15
A key ingredient of cavity QED is the coupling between the discrete energy levels of an atom and photons in a single-mode cavity. The addition of periodic ultrashort laser pulses allows one to use such a system as a source of single photons--a vital ingredient in quantum information and optical computing schemes. Here we analyze and time-adjust the photon-counting statistics of such a single-photon source and show that the photon statistics can be described by a simple transport-like nonequilibrium model. We then show that there is a one-to-one correspondence of this model to that of nonequilibrium transport of electrons through a double quantum dot nanostructure, unifying the fields of photon-counting statistics and electron-transport statistics. This correspondence empowers us to adapt several tools previously used for detecting quantum behavior in electron-transport systems (e.g., super-Poissonian shot noise and an extension of the Leggett-Garg inequality) to single-photon-source experiments.
8. A Deterministic Electron, Photon, Proton and Heavy Ion Radiation Transport Suite for the Study of the Jovian System
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003; for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105 days at Europa mission fluence energy spectra provided by JPL is used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils ( 0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI--J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer with the choice of graded-Z shield analysis.
9. Dynamic Monte-Carlo modeling of hydrogen isotope reactive diffusive transport in porous graphite
Schneider, R.; Rai, A.; Mutzke, A.; Warrier, M.; Salonen, E.; Nordlund, K.
2007-08-01
An equal mixture of deuterium and tritium will be the fuel used in a fusion reactor. It is important to study the recycling and mixing of these hydrogen isotopes in graphite from several points of view: (i) impact on the ratio of deuterium to tritium in a reactor, (ii) continued use of graphite as a first wall and divertor material, and (iii) reaction with carbon atoms and the transport of hydrocarbons will provide insight into chemical erosion. Dynamic Monte-Carlo techniques are used to study the reactive-diffusive transport of hydrogen isotopes and interstitial carbon atoms in a 3-D porous graphite structure irradiated with hydrogen and deuterium, and the results are compared with published experimental results for hydrogen re-emission and isotope exchange.
10. Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
SciTech Connect
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-07
The Monte Carlo (MC) simulation of electron transport properties in the high electric field region in 4H- and 6H-SiC is presented. This MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as a function of applied electric field at an impurity concentration of 1×10^18 cm^-3 at room temperature. The simulated dependence of the drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows complete electron transport properties in 4H- and 6H-SiC.
11. Monte Carlo simulations of electron transport for electron beam-induced deposition of nanostructures
Salvat-Pujol, Francesc; Jeschke, Harald O.; Valenti, Roser
2013-03-01
Tungsten hexacarbonyl, W(CO)6, is a particularly interesting precursor molecule for electron beam-induced deposition of nanoparticles, since it yields deposits whose electronic properties can be tuned from metallic to insulating. However, the growth of tungsten nanostructures poses experimental difficulties: the metal content of the nanostructure is variable. Furthermore, fluctuations in the tungsten content of the deposits seem to trigger the growth of the nanostructure. Monte Carlo simulations of electron transport have been carried out with the radiation-transport code Penelope in order to study the charge and energy deposition of the electron beam in the deposit and in the substrate. These simulations allow us to examine the conditions under which nanostructure growth takes place and to highlight the relevant parameters in the process.
12. A full-band Monte Carlo model for hole transport in silicon
Jallepalli, S.; Rashed, M.; Shih, W.-K.; Maziar, C. M.; Tasch, A. F., Jr.
1997-03-01
Hole transport in bulk silicon is explored using an efficient and accurate Monte Carlo (MC) tool based on the local pseudopotential band structure. Acoustic and optical phonon scattering, ionized impurity scattering, and impact ionization are the dominant scattering mechanisms that have been included. In the interest of computational efficiency, momentum relaxation times have been used to describe ionized impurity scattering and self-scattering rates have been computed in a dynamic fashion. The temperature and doping dependence of low-field hole mobility is obtained and good agreement with experimental data has been observed. MC extracted impact ionization coefficients are also shown to agree well with published experimental data. Momentum and energy relaxation times are obtained as a function of the average hole energy for use in moment based hydrodynamic simulators. The MC model is suitable for studying both low-field and high-field hole transport in silicon.
13. Comparison of generalized transport and Monte-Carlo models of the escape of a minor species
NASA Technical Reports Server (NTRS)
Demars, H. G.; Barakat, A. R.; Schunk, R. W.
1993-01-01
The steady-state diffusion of a minor species through a static background species is studied using a Monte Carlo model and a generalized 16-moment transport model. The two models are in excellent agreement in the collision-dominated region and in the 'transition region'. In the 'collisionless' region the 16-moment solution contains two singularities, and physical meaning cannot be assigned to the solution in their vicinity. In all regions, agreement between the models is best for the distribution function and for the lower-order moments and is less good for higher-order moments. Moments of order higher than the heat flow and hence beyond the level of description provided by the transport model have a noticeable effect on the shape of distribution functions in the collisionless region.
14. Monte Carlo Calculation of Slow Electron Beam Transport in Solids:. Reflection Coefficient Theory Implications
Bentabet, A.
The reflection coefficient theory developed by Vicanek and Urbassek showed that the backscattering coefficient of light ions impinging on semi-infinite solid targets is strongly related to both the range and the first transport cross-section. In this work, for the electron case, we show that not only the backscattering coefficient but also most electron transport quantities (such as the mean penetration depth, the diffusion polar angles, the final backscattering energy, etc.) are strongly correlated with both of these quantities (i.e. the range and the first transport cross-section). In addition, most of the electron transport quantities are only weakly correlated with the detailed distribution of the scattering angle and the total elastic cross-section. To make our study as straightforward and clear as possible, we have used different input data sets for the elastic cross-sections and ranges in our Monte Carlo code to study the mean penetration depth and the backscattering coefficient of slow electrons impinging on semi-infinite aluminum and gold in the energy range up to 10 keV. Extending the present study to other materials and other transport quantities using the same models is feasible.
15. Monte Carlo simulation and Boltzmann equation analysis of non-conservative positron transport in H2
Banković, A.; Dujko, S.; White, R. D.; Buckman, S. J.; Petrović, Z. Lj.
2012-05-01
This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n0 are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise: first, for certain regions of E/n0 the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H2 under the influence of an electric field, the spatially dependent positron transport properties, such as the number of positrons, the average energy and velocity, and the spatially resolved rate for Ps formation, are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and the extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding the phenomena.
16. Hybrid two-dimensional Monte-Carlo electron transport in self-consistent electromagnetic fields
SciTech Connect
Mason, R.J.; Cranfill, C.W.
1985-01-01
The physics and numerics of the hybrid electron transport code ANTHEM are described. The need for the hybrid modeling of laser generated electron transport is outlined, and a general overview of the hybrid implementation in ANTHEM is provided. ANTHEM treats the background ions and electrons in a laser target as coupled fluid components moving relative to a fixed Eulerian mesh. The laser converts cold electrons to an additional hot electron component which evolves on the mesh as either a third coupled fluid or as a set of Monte Carlo PIC particles. The fluids and particles move in two-dimensions through electric and magnetic fields calculated via the Implicit Moment method. The hot electrons are coupled to the background thermal electrons by Coulomb drag, and both the hot and cold electrons undergo Rutherford scattering against the ion background. Subtleties of the implicit E- and B-field solutions, the coupled hydrodynamics, and large time step Monte Carlo particle scattering are discussed. Sample applications are presented.
17. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport
SciTech Connect
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within workgroup load balance and minimize memory usage.
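One simplified way to picture item (2), deciding that particle streaming has finished, is a global reduction over the number of particles still in flight; the mpi4py sketch below illustrates the pattern only and is not the authors' algorithm.

```python
# Simplified illustration (not the authors' algorithm) of one way to decide
# that particle streaming has finished: repeat local transport plus neighbor
# exchange until a global reduction shows no particles remain in flight.
# Requires mpi4py; run with e.g. `mpirun -n 4 python this_file.py`.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
random.seed(rank)

local = 1000 if rank == 0 else 0                  # all histories start on rank 0
while True:
    send_counts = [0] * size                      # particles leaving to each domain
    for _ in range(local):
        if random.random() < 0.3:                 # 30% chance of crossing a boundary
            send_counts[random.randrange(size)] += 1
        # otherwise the history terminates inside this domain
    local = sum(comm.alltoall(send_counts))       # receive incoming particle counts
    if comm.allreduce(local, op=MPI.SUM) == 0:    # nothing left in flight anywhere
        break
print(f"rank {rank}: streaming finished")
```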
18. Single-photon transport through an atomic chain coupled to a one-dimensional nanophotonic waveguide
Liao, Zeyang; Zeng, Xiaodong; Zhu, Shi-Yao; Zubairy, M. Suhail
2015-08-01
We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of coupling strength and the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100 % . The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system such as the photonic band-gap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single-photon frequency filter, single-photon modulation, and may find important applications in quantum information.
19. Monte Carlo model of the transport in the atmosphere of relativistic electrons and γ-rays associated to TGF
Sarria, D.; Forme, F.; Blelly, P.
2013-12-01
Onboard the TARANIS satellite, the CNES mission dedicated to the study of TLEs and TGFs, IDEE and XGRE are the two instruments which will measure relativistic electrons and X and gamma rays. At the altitude of the satellite, the fluxes have been significantly altered by the filtering of the atmosphere, and the satellite only measures a subset of the particles. Therefore, the inverse problem, to get information on the sources and on the mechanisms responsible for these emissions, is rather tough to tackle, especially if we want to take advantage of the other instruments which will provide indirect information on those particles. The only reasonable way to solve this problem is to embed in the data processing a theoretical approach using a numerical model of the generation and the transport of these burst emissions. For this purpose, we have started to develop a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons and gamma-rays. After a brief presentation of the model and its validation by comparison with GEANT 4, we discuss how the photons and electrons may be spatially dispersed as a function of their energy at the altitude of the satellite, depending on the source properties, and the impact that this could have on the detection by the satellite. Then, we give preliminary results on the interaction of the energetic particles with the neutral atmosphere, mainly in terms of the production rate of excited states, which will be accessible through the MCP experiment, and of ionized species, which are important for the electrodynamics.
20. Jet transport and photon bremsstrahlung via longitudinal and transverse scattering
Qin, Guang-You; Majumder, Abhijit
2015-04-01
We study the effect of multiple scatterings on the propagation of hard partons and the production of jet-bremsstrahlung photons inside a dense medium in the framework of deep-inelastic scattering off a large nucleus. We include the momentum exchanges in both longitudinal and transverse directions between the hard partons and the constituents of the medium. Keeping up to the second order in a momentum gradient expansion, we derive the spectrum for the photon emission from a hard quark jet when traversing dense nuclear matter. Our calculation demonstrates that the photon bremsstrahlung process is influenced not only by the transverse momentum diffusion of the propagating hard parton, but also by the longitudinal drag and diffusion of the parton momentum. A notable outcome is that the longitudinal drag tends to reduce the amount of stimulated emission from the hard parton.
1. Ballistic transport in one-dimensional random dimer photonic crystals
Cherid, Samira; Bentata, Samir; Zitouni, Ali; Djelti, Radouan; Aziz, Zoubir
2014-04-01
Using the transfer-matrix technique and the Kronig-Penney model, we numerically and analytically investigate the effect of short-range correlated disorder in the Random Dimer Model (RDM) on the transmission properties of light in one-dimensional photonic crystals made of three different materials. Such systems consist of two different structures randomly distributed along the growth direction, with the additional constraint that one kind of these layers always appears in pairs. It is shown that the one-dimensional random dimer photonic crystals support two types of extended modes. By shifting the dimer resonance toward the host fundamental stationary resonance state, we demonstrate the existence of the ballistic response in these systems.
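A minimal normal-incidence transfer-matrix (characteristic-matrix) sketch for a random binary stack with paired "dimer" layers is given below; the refractive indices, thicknesses and dimer probability are illustrative and not the paper's Kronig-Penney parameters.

```python
# Minimal sketch of a normal-incidence transfer-matrix (characteristic matrix)
# calculation for a 1D stack with randomly placed "dimer" layers. Indices and
# thicknesses are illustrative, not the paper's Kronig-Penney parameters.
import numpy as np

rng = np.random.default_rng(3)

def layer_matrix(n, d, lam):
    delta = 2.0 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam, n_in=1.0, n_out=1.0):
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, lam)
    denom = n_in * m[0, 0] + n_in * n_out * m[0, 1] + m[1, 0] + n_out * m[1, 1]
    t = 2.0 * n_in / denom
    return (n_out / n_in) * abs(t) ** 2

# Random binary stack (host index 1.5, "dimer" index 2.3 always appearing in pairs).
stack = []
for _ in range(60):
    if rng.uniform() < 0.3:
        stack += [(2.3, 0.10), (2.3, 0.10)]       # the defect layer appears in pairs
    else:
        stack += [(1.5, 0.10)]
print(transmittance(stack, lam=0.6))
```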
2. Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations
SciTech Connect
Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W.
2011-04-15
Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0×4.0 to 30.0×30.0 cm²) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC=0.8%) in lung (ρ=0.24 g cm⁻³) and within ±2.9% (σ_MC=0.8%) in low-density lung (ρ=0.1 g cm⁻³). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ=0.001 g cm⁻³) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC=0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.
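Agreement criteria such as the ±2%/2 mm quoted above are commonly evaluated with a gamma-index test (Low et al.); the 1D sketch below is illustrative only and not the evaluation code used in the study.

```python
# Minimal 1D gamma-index sketch (global dose criterion), of the kind used for
# 2%/2 mm comparisons. Not the authors' evaluation code.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.02, dta_mm=2.0):
    """Gamma value at each reference point; the pass rate is mean(gamma <= 1)."""
    d_norm = dd * dose_ref.max()                  # global dose normalization
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((dose_eval - di) / d_norm) ** 2
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

x = np.arange(0.0, 100.0, 1.0)                    # mm
ref = np.exp(-x / 120.0)                          # toy depth-dose curves
ev = np.exp(-(x - 0.5) / 118.0)                   # slightly shifted/rescaled
g = gamma_1d(x, ref, ev)
print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```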
3. A Test Particle Model for Monte Carlo Simulation of Plasma Transport Driven by Quasineutrality
Kuhl, Nelson M.
1995-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. We perform numerical experiments to validate a mathematical model of P. R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The simulations are made using a transport code written by O. Betancourt and M. Taylor, with changes to incorporate our case studies. We adopt a test particle model naturally suggested by the problem of tracking particles in plasma physics. The statistics due to collisions are modeled by a drift kinetic equation whose numerical solution is based on the Monte Carlo method of A. Boozer and G. Kuo-Petravic. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian. It is shown that details of the collision operator other than its dependence on the collision frequency and temperature matter little for transport, and the role of conservation of momentum is investigated. Exponential decay makes it possible to find the confinement times of both ions and electrons by high performance computing. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. We make a convergence study of the method, derive scaling laws that are in good agreement with predictions from experimental data, and present a comparison with the JET experiment.
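The Monte Carlo pitch-angle (Lorentz) collision operator of Boozer and Kuo-Petravic is commonly quoted in the form used in the sketch below: a deterministic drag on the pitch plus a random kick whose size is set by the collision frequency. The parameters here are illustrative only.

```python
# Minimal sketch of the Monte Carlo pitch-angle (Lorentz) collision operator
# in the form commonly attributed to Boozer and Kuo-Petravic: at each step the
# pitch lambda = v_par/v receives a deterministic drag plus a random kick.
import numpy as np

rng = np.random.default_rng(4)

def pitch_angle_step(lam, nu, dt):
    sign = rng.choice([-1.0, 1.0], size=np.shape(lam))
    lam_new = lam * (1.0 - nu * dt) + sign * np.sqrt((1.0 - lam**2) * nu * dt)
    return np.clip(lam_new, -1.0, 1.0)

# An initially beamed distribution isotropizes: <lambda> decays roughly as exp(-nu*t).
lam = np.ones(100000)
nu, dt = 1.0, 0.01
for step in range(300):
    lam = pitch_angle_step(lam, nu, dt)
print(lam.mean(), np.exp(-nu * 300 * dt))
```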
4. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-01
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
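The velocity dependence of the VHS total cross section can be sketched as a simple power law in the relative speed; the snippet below keeps only that scaling (the Gamma-function normalization in Bird's exact definition is omitted), and the N2-like parameters are illustrative.

```python
# Sketch of the VHS total cross section's power-law dependence on relative
# speed, sigma_T ~ c_r**(1 - 2*omega), normalized to a reference diameter at a
# reference temperature (Bird-style parameters; numbers are illustrative and
# the exact Gamma-function normalization is omitted).
import numpy as np

K_B = 1.380649e-23  # J/K

def vhs_cross_section(c_r, d_ref, t_ref, omega, m_reduced):
    """Approximate VHS total cross section (m^2) for relative speed c_r (m/s)."""
    c_ref = np.sqrt(2.0 * K_B * t_ref / m_reduced)   # characteristic reference speed
    return np.pi * d_ref**2 * (c_ref / c_r) ** (2.0 * omega - 1.0)

# Illustrative N2-like parameters: d_ref = 4.17e-10 m, omega = 0.74, T_ref = 273 K.
m_n2 = 4.65e-26  # kg
print(vhs_cross_section(c_r=1000.0, d_ref=4.17e-10, t_ref=273.0,
                        omega=0.74, m_reduced=m_n2 / 2.0))
```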
5. Epithelial cancers and photon migration: Monte Carlo simulations and diffuse reflectance measurements
Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David
2015-07-01
Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside tissues to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of tissue phantoms were measured with a spectrometer under several illumination and optical settings for various homogeneous tissue phantoms. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements from an added-absorber experiment on a series of phantoms showed that the absorption of the dye scales linearly when fit to both the MCX and diffusion models. More work is needed to integrate a pupil into the experiment.
6. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to correctly calculate from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters when using deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods, however doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. This improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (therefore reducing the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every one of the scattering moment matrices elements with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process potentially offering even further improvement to the tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by way of a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
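The following toy sketch, with a hypothetical outgoing-angle distribution standing in for the pre-processed NDPP data, contrasts an analog tally of a Legendre scattering moment (score P_l of the single sampled cosine) with an expected-value tally that scores the pre-integrated moment of the full outgoing distribution at every event, which is the essence of the efficiency gain described above.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# Toy outgoing-angle PDF f(mu) = (1 + a*mu)/2 on [-1, 1], a stand-in for the
# pre-processed outgoing distribution a code like NDPP would provide.
a = 0.6
pdf = lambda mu: 0.5 * (1.0 + a * mu)

def sample_mu(n):
    # Inverse-CDF sampling of f(mu) = (1 + a*mu)/2 on [-1, 1].
    u = rng.random(n)
    return (-1.0 + np.sqrt((1.0 - a) ** 2 + 4.0 * a * u)) / a

l = 1
Pl = lambda mu: L.legval(mu, [0.0] * l + [1.0])   # Legendre polynomial P_l(mu)

# Analog tally: one sampled cosine per collision event.
n_events = 10_000
analog = Pl(sample_mu(n_events)).mean()

# Expected-value tally: integrate P_l over the known outgoing PDF once and
# score that number at every collision (zero variance per event here).
mu_grid = np.linspace(-1.0, 1.0, 2001)
expected = np.trapz(Pl(mu_grid) * pdf(mu_grid), mu_grid)   # exact value is a/3

print(f"analog estimate of <P_{l}> = {analog:.4f}")
print(f"expected-value tally       = {expected:.4f}")
```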
7. Monte Carlo analysis of high-frequency non-equilibrium transport in mercury-cadmium-telluride for infrared detection
Palermo, Christophe; Varani, Luca; Vaissière, Jean-Claude
2004-04-01
We present a theoretical analysis of both static and small-signal electron transport in Hg0.8Cd0.2Te in order to study the high-frequency behaviour of this material usually employed for infrared detection. Firstly, we simulate static conditions by using a Monte Carlo simulation in order to extract transport parameters. Then, an analytical method based on hydrodynamic equations is used to perform the small-signal study by modelling the high-frequency differential mobility. This approach allows a full study of the frequency response for arbitrary electric fields starting only from static parameters, and overcomes technical problems of direct Monte Carlo simulations.
8. Monte Carlo Study of Fetal Dosimetry Parameters for 6 MV Photon Beam
PubMed Central
Atarod, Maryam; Shokrani, Parvaneh
2013-01-01
Because of the adverse effects of ionizing radiation on fetuses, prior to radiotherapy of pregnant patients, fetal dose should be estimated. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (ATs). In this study, the effect of maternal AT and depth in fetal dosimetry was investigated, using peripheral dose (PD) distribution evaluations. A BEAMnrc model of an Oncor linac, including out-of-beam components, was used for dose calculations at the out-of-field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were done using EBT2 radiochromic film in an RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field's edge, and in-plane PD profiles at two depths. The results of this study show that PD is depth dependent near the field's edge. The increase in AT does not change the PD depth of maximum or its distribution as a function of distance from the field's edge. It is concluded that estimating the maximum fetal dose using a flat phantom, i.e., without taking into account the AT, is possible. Furthermore, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, in order to estimate the maximum PD, the in-plane profile should be measured at the out-of-field depth of maximum dose. PMID:24083135
9. Study of the response of plastic scintillation detectors in small-field 6 MV photon beams by Monte Carlo simulations
SciTech Connect
Wang, Lilie L. W.; Beddar, Sam
2011-03-15
Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10 × 10 down to 0.5 × 0.5 cm² for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5 × 0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10 × 10 to 0.5 × 0.5 cm², while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied, at most, by 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work can have a variation of only 1%-2% irrespective of field size and orientation of the detector if the length of the sensitive region is no more than 2 mm and the optical fiber stems are prevented from pointing directly at the incident source.
10. Influence of electrodes on the photon energy deposition in CVD-diamond dosimeters studied with the Monte Carlo code PENELOPE.
PubMed
Górka, B; Nilsson, B; Fernández-Varea, J M; Svensson, R; Brahme, A
2006-08-01
A new dosimeter, based on chemical vapour deposited (CVD) diamond as the active detector material, is being developed for dosimetry in radiotherapeutic beams. CVD-diamond is a very interesting material, since its atomic composition is close to that of human tissue and in principle it can be designed to introduce negligible perturbations to the radiation field and the dose distribution in the phantom due to its small size. However, non-tissue-equivalent structural components, such as electrodes, wires and encapsulation, need to be carefully selected as they may induce severe fluence perturbation and angular dependence, resulting in erroneous dose readings. By introducing metallic electrodes on the diamond crystals, interface phenomena between high- and low-atomic-number materials are created. Depending on the direction of the radiation field, an increased or decreased detector signal may be obtained. The small dimensions of the CVD-diamond layer and electrodes (around 100 microm and smaller) imply a higher sensitivity to the lack of charged-particle equilibrium and may cause severe interface phenomena. In the present study, we investigate the variation of energy deposition in the diamond detector for different photon-beam qualities, electrode materials and geometric configurations using the Monte Carlo code PENELOPE. The prototype detector was produced from a 50 microm thick CVD-diamond layer with 0.2 microm thick silver electrodes on both sides. The mean absorbed dose to the detector's active volume was modified in the presence of the electrodes by 1.7%, 2.1%, 1.5%, 0.6% and 0.9% for 1.25 MeV monoenergetic photons, a complete (i.e. shielded) (60)Co photon source spectrum and 6, 18 and 50 MV bremsstrahlung spectra, respectively. The shift in mean absorbed dose increases with increasing atomic number and thickness of the electrodes, and diminishes with increasing thickness of the diamond layer. From a dosimetric point of view, graphite would be an almost perfect electrode material. This study shows that, for the considered therapeutic beam qualities, the perturbation of the detector signal due to charge-collecting graphite electrodes of thicknesses between 0.1 and 700 microm is negligible within the calculation uncertainty of 0.2%. PMID:16861769
11. Mesh-based Monte Carlo method for fibre-optic optogenetic neural stimulation with direct photon flux recording strategy.
PubMed
Shin, Younghoon; Kwon, Hyuk-Sang
2016-03-21
We propose a Monte Carlo (MC) method based on a direct photon flux recording strategy using an inhomogeneous, meshed rodent brain atlas. This MC method was inspired by and dedicated to fibre-optics-based optogenetic neural stimulations, thus providing an accurate and direct solution for light intensity distributions in brain regions with different optical properties. Our model was used to estimate the 3D light intensity attenuation for close proximity between an implanted optical fibre source and a neural target area for typical optogenetics applications. Interestingly, there are discrepancies with studies using a diffusion-based light intensity prediction model, perhaps due to the use of improper light scattering models developed for far-field problems. Our solution was validated by comparison with the gold-standard MC model, and it enabled accurate calculations of internal intensity distributions in an inhomogeneous domain near the light source. Thus, our strategy can be applied to studying how illuminated light spreads through an inhomogeneous brain area, or for determining the amount of light required for optogenetic manipulation of a specific neural target area. PMID:26914289
12. Gel dosimetry measurements and Monte Carlo modeling for external radiotherapy photon beams: Comparison with a treatment planning system dose distribution
Valente, M.; Aon, E.; Brunetto, M.; Castellano, G.; Gallivanone, F.; Gambarini, G.
2007-09-01
Gel dosimetry has proved to be useful to determine absorbed dose distributions in radiotherapy, as well as to validate treatment plans. Gel dosimetry allows dose imaging and is particularly helpful for non-uniform dose distribution measurements, as may occur when multiple-field irradiation techniques are employed. In this work, we report gel-dosimetry measurements and Monte Carlo (PENELOPE®) calculations for the dose distribution inside a tissue-equivalent phantom exposed to a typical multiple-field irradiation. Irradiations were performed with a 10 MV photon beam from a Varian® Clinac 18 accelerator. The employed dosimeters consisted of layers of Fricke Xylenol Orange radiochromic gel. The method for absorbed dose imaging was based on analysis of visible light transmittance, usually detected by means of a CCD camera. With the aim of finding a simple method for light transmittance image acquisition, a commercial flatbed-like scanner was employed. The experimental and simulated dose distributions have been compared with those calculated with a commercially available treatment planning system, showing a reasonable agreement.
13. On the Monte Carlo simulation of small-field micro-diamond detectors for megavoltage photon dosimetry
Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa
2016-01-01
Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data the MC results showed substantial disagreement for the smallest field size simulated (5 mm × 5 mm). To explain the difference between the two datasets, a detector was imaged with x rays searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image was observed. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and its manufacturer production, as they may differ substantially. The constraint is applicable to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging.
14. On the Monte Carlo simulation of small-field micro-diamond detectors for megavoltage photon dosimetry.
PubMed
Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa
2016-01-01
Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data the MC results showed substantial disagreement for the smallest field size simulated ([Formula: see text] mm). To explain the difference between the two datasets, a detector was imaged with x rays searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image was observed. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and its manufacturer production, as they may differ substantially. The constraint is applicable to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging. PMID:26630437
15. Understanding the lateral dose response functions of high-resolution photon detectors by reverse Monte Carlo and deconvolution analysis.
PubMed
Looe, Hui Khee; Harder, Dietrich; Poppe, Björn
2015-08-21
The purpose of the present study is to understand the mechanism underlying the perturbation of the field of the secondary electrons, which occurs in the presence of a detector in water as the surrounding medium. By means of 'reverse' Monte Carlo simulation, the points of origin of the secondary electrons contributing to the detector's signal are identified and associated with the detector's mass density, electron density and atomic composition. The spatial pattern of the origin of these secondary electrons, in addition to the formation of the detector signal by components from all parts of its sensitive volume, determines the shape of the lateral dose response function, i.e. of the convolution kernel K(x,y) linking the lateral profile of the absorbed dose in the undisturbed surrounding medium with the associated profile of the detector's signal. The shape of the convolution kernel is shown to vary essentially with the electron density of the detector's material, and to be attributable to the relative contribution by the signal-generating secondary electrons originating within the detector's volume to the total detector signal. Finally, the representation of the over- or underresponse of a photon detector by this density-dependent convolution kernel will be applied to provide a new analytical expression for the associated volume effect correction factor. PMID:26267311
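A minimal one-dimensional illustration of the convolution relation described above, with a hypothetical Gaussian standing in for the lateral dose response function K:

```python
import numpy as np

# 1-D illustration: the measured lateral signal profile is the true dose
# profile convolved with the detector's lateral dose response function K.
x = np.linspace(-30.0, 30.0, 601)             # lateral position [mm]
dx = x[1] - x[0]

# Idealised dose profile of a 10 mm wide field with sharp penumbra.
dose = np.where(np.abs(x) <= 5.0, 1.0, 0.0)

# Hypothetical Gaussian response function (width chosen arbitrarily here);
# the paper relates the real K to the detector's electron density.
sigma = 1.5                                   # [mm]
K = np.exp(-0.5 * (x / sigma) ** 2)
K /= K.sum() * dx                             # normalise to unit area

signal = np.convolve(dose, K, mode="same") * dx
print("dose width at half max   ~", dx * np.count_nonzero(dose >= 0.5), "mm")
print("signal width at half max ~", dx * np.count_nonzero(signal >= 0.5 * signal.max()), "mm")
```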
16. Guiding electromagnetic waves around sharp corners: topologically protected photonic transport in meta-waveguides (Presentation Recording)
Shvets, Gennady B.; Khanikaev, Alexander B.; Ma, Tzuhsuan; Lai, Kueifu
2015-09-01
Science thrives on analogies, and a considerable number of inventions and discoveries have been made by pursuing an unexpected connection to a very different field of inquiry. For example, photonic crystals have been referred to as "semiconductors of light" because of the far-reaching analogies between electron propagation in a crystal lattice and light propagation in a periodically modulated photonic environment. However, two aspects of electron behavior, its spin and helicity, escaped emulation by photonic systems until the recent invention of photonic topological insulators (PTIs). The impetus for these developments in photonics came from the discovery of topologically nontrivial phases in condensed matter physics enabling edge states immune to scattering. The realization of topologically protected transport in photonics would circumvent a fundamental limitation imposed by the wave equation: the inability of light to propagate without reflections along a sharply bent pathway. Topologically protected electromagnetic states could be used for transporting photons without any scattering, potentially underpinning new revolutionary concepts in applied science and engineering. I will demonstrate that a PTI can be constructed by applying three types of perturbations: (a) finite bianisotropy, (b) gyromagnetic inclusion breaking the time-reversal (T) symmetry, and (c) asymmetric rods breaking the parity (P) symmetry. We will experimentally demonstrate (i) the existence of a full topological bandgap in a bianisotropic structure, and (ii) the reflectionless nature of wave propagation along the interface between two PTIs with opposite signs of the bianisotropy.
17. Bone and mucosal dosimetry in skin radiation therapy: a Monte Carlo study using kilovoltage photon and megavoltage electron beams
Chow, James C. L.; Jiang, Runqing
2012-06-01
This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle3 treatment planning system) using the MeV electron beams were performed in dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%), when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the bone dose of the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. Dose in the second bone layer interfacing with air was found to be higher for the 220 kVp beam than for the 105 kVp beam, when the bone thickness was 1 cm. In this study, dose deviations of bone and mucosal layers of 18% and 17% were found between our results from Monte Carlo simulation and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and thicknesses of the bone and uniform water using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provided essential insight for radiation staff to justify and estimate bone and mucosal dose.
18. Improved cache performance in Monte Carlo transport calculations using energy banding
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
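A rough sketch of the banding idea under simplified assumptions (a single flat cross-section table and one toy lookup per particle); the real algorithm operates inside a full MC transport sweep:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in cross-section table: n_nuclides x n_energy_points (read-only).
n_energy, n_nuclides = 50_000, 50
xs_table = rng.random((n_nuclides, n_energy))
energy_grid = np.linspace(1e-5, 20.0e6, n_energy)        # [eV]

# Particle energies for one transport "sweep".
particle_E = rng.uniform(1e-5, 20.0e6, size=200_000)

# Energy banding: partition the energy axis into bands sized so that one
# band's slice of the table fits in cache, then process particles band by
# band so lookups within a band keep reusing the same slice.
n_bands = 16
band_edges = np.linspace(energy_grid[0], energy_grid[-1], n_bands + 1)
band_of = np.digitize(particle_E, band_edges[1:-1])      # band index per particle

for b in range(n_bands):
    sel = np.where(band_of == b)[0]
    if sel.size == 0:
        continue
    lo = np.searchsorted(energy_grid, band_edges[b])
    hi = np.searchsorted(energy_grid, band_edges[b + 1], side="right")
    band_slice = xs_table[:, lo:hi]                      # small, cache-friendly slice
    idx = np.clip(np.searchsorted(energy_grid[lo:hi], particle_E[sel]) - 1,
                  0, band_slice.shape[1] - 1)
    sigma_t = band_slice[:, idx].sum(axis=0)             # toy lookup: total XS per particle
```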
19. Massively parallel kinetic Monte Carlo simulations of charge carrier transport in organic semiconductors
van der Kaap, N. J.; Koster, L. J. A.
2016-02-01
A parallel, lattice-based Kinetic Monte Carlo simulation is developed that runs on a GPGPU board and includes Coulomb-like particle-particle interactions. The performance of this computationally expensive problem is improved by modifying the interaction potential due to nearby particle moves, instead of fully recalculating it. This modification is achieved by adding dipole correction terms that represent the particle move. Exact evaluation of these terms is guaranteed by representing all interactions as 32-bit floating-point numbers, where only the integers between −2²² and 2²² are used. We validate our method by modelling the charge transport in disordered organic semiconductors, including Coulomb interactions between charges. Performance is mainly governed by the particle density in the simulation volume, and improves for increasing densities. Our method allows calculations on large volumes including particle-particle interactions, which is important in the field of organic semiconductors.
20. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
SciTech Connect
Romano, Paul K; Brown, Forrest B; Forget, Benoit
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, the authors investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
1. Monte Carlo simulation of ballistic transport in high-mobility channels
Sabatini, G.; Marinchio, H.; Palermo, C.; Varani, L.; Daoud, T.; Teissier, R.; Rodilla, H.; González, T.; Mateos, J.
2009-11-01
By means of Monte Carlo simulations coupled with a two-dimensional Poisson solver, we evaluate directly the possibility of using high-mobility materials in ultrafast devices exploiting ballistic transport. To this purpose, we have calculated specific physical quantities such as the transit time, the transit velocity, the free flight time and the mean free path as functions of applied voltage in InAs channels with different lengths, from 2000 nm down to 50 nm. In this way the transition from diffusive to ballistic transport is carefully described. We note a high value of the mean transit velocity, with a maximum of 14 × 10⁵ m/s for a 50 nm-long channel and a transit time shorter than 0.1 ps, corresponding to a cutoff frequency in the terahertz domain. The percentage of ballistic electrons and the number of scatterings as functions of distance are also reported, showing the strong influence of quasi-ballistic transport in the shorter channels.
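A quick back-of-the-envelope check of the quoted figures (illustrative only; the paper's cutoff-frequency definition may differ from the 1/(2πτ) scale used here):

```python
import math

# Back-of-the-envelope check of the quoted numbers (illustrative only).
v_transit = 14e5          # peak mean transit velocity [m/s]
L_channel = 50e-9         # channel length [m]

tau = L_channel / v_transit             # transit time [s]
f_scale = 1.0 / (2.0 * math.pi * tau)   # 1/(2*pi*tau) frequency scale

print(f"transit time ~ {tau * 1e12:.3f} ps")        # well below 0.1 ps
print(f"1/(2*pi*tau) ~ {f_scale / 1e12:.1f} THz")   # terahertz range
```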
2. Monte Carlo modeling of transport in PbSe nanocrystal films
SciTech Connect
Carbone, I.; Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, hole mobility simulations confirm measurements that increase monotonically with particle size over two orders of magnitude.
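As a generic illustration of the kind of model described above (not the authors' parameterisation), the sketch below runs a single-carrier kinetic Monte Carlo walk on a cubic lattice with Gaussian site-energy disorder and Miller-Abrahams-type activated rates; all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Cubic lattice of "nanocrystal" sites with Gaussian energetic disorder.
N = 16                                   # lattice size (N^3 sites)
kT = 0.025                               # thermal energy [eV]
sigma = 0.075                            # disorder width [eV]
E_site = rng.normal(0.0, sigma, size=(N, N, N))
nu0 = 1e12                               # attempt frequency [1/s]

neighbours = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]])

def hop_rate(dE):
    # Miller-Abrahams-type rate: thermally activated uphill, constant downhill.
    return nu0 * np.exp(-max(dE, 0.0) / kT)

# Single-carrier kinetic Monte Carlo walk with periodic boundaries.
pos = np.array([N // 2] * 3)
t, n_hops = 0.0, 20_000
for _ in range(n_hops):
    here = E_site[tuple(pos % N)]
    rates = np.array([hop_rate(E_site[tuple((pos + d) % N)] - here)
                      for d in neighbours])
    R = rates.sum()
    t += rng.exponential(1.0 / R)                    # residence time on this site
    pos = pos + neighbours[rng.choice(6, p=rates / R)]

print(f"simulated time {t:.3e} s after {n_hops} hops")
```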
3. Cartesian Meshing Impacts for PWR Assemblies in Multigroup Monte Carlo and Sn Transport
Manalo, K.; Chin, M.; Sjoden, G.
2014-06-01
Hybrid methods of neutron transport have increased greatly in use, for example, in applications using both Monte Carlo and deterministic transport to calculate quantities of interest, such as flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, and thus for nuclear reactors the representation of curved fuels (cylinder, sphere, etc.) is impacted in terms of proper fuel inventory (both deviation of mass and exact geometry representation). For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation, and perform an analysis to calculate a slope parameter that relates the pcm to the percent areal/volumetric deviation (areal corresponds to 2D and volumetric to 3D, respectively). Our initial analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using Multigroup MCNP on a PWR assembly compared to a reference exact combinatorial MCNP geometry calculation. For the same multigroup problems, we also intend to characterize this linear relationship in discrete ordinates (3D PENTRAN) and discuss issues related to transport cross-comparison. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools to allow for full generation of MCNP5 inputs (Cartesian mesh and Multigroup XS) from a basis PENTRAN Sn model.
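The areal-deviation metric referred to above can be illustrated with a simple rasterisation experiment; the cell-centre inclusion rule and pin dimensions below are assumptions, not the paper's homogenisation convention.

```python
import numpy as np

def areal_deviation(radius, pitch, n_cells):
    """Relative area error when a circular fuel-pin cross section is
    represented on an n_cells x n_cells Cartesian mesh spanning one pitch,
    marking a cell as 'fuel' if its centre lies inside the circle."""
    h = pitch / n_cells
    centres = (np.arange(n_cells) + 0.5) * h - pitch / 2.0
    X, Y = np.meshgrid(centres, centres)
    inside = (X**2 + Y**2) <= radius**2
    mesh_area = inside.sum() * h * h
    exact_area = np.pi * radius**2
    return (mesh_area - exact_area) / exact_area

for n in (4, 8, 16, 32, 64):
    dev = areal_deviation(radius=0.41, pitch=1.26, n_cells=n)   # typical PWR pin values [cm]
    print(f"{n:3d} x {n:<3d} mesh: areal deviation = {100 * dev:+.2f} %")
```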
4. Observing gas and dust in simulations of star formation with Monte Carlo radiation transport on Voronoi meshes
Hubber, D. A.; Ercolano, B.; Dale, J.
2016-02-01
Ionizing feedback from massive stars dramatically affects the interstellar medium local to star-forming regions. Numerical simulations are now starting to include enough complexity to produce morphologies and gas properties that are not too dissimilar from observations. The comparison between the density fields produced by hydrodynamical simulations and observations at given wavelengths relies however on photoionization/chemistry and radiative transfer calculations. We present here an implementation of Monte Carlo radiation transport through a Voronoi tessellation in the photoionization and dust radiative transfer code MOCASSIN. We show for the first time a synthetic spectrum and synthetic emission line maps of a hydrodynamical simulation of a molecular cloud affected by massive stellar feedback. We show that the approach on which previous work is based, which remapped hydrodynamical density fields on to Cartesian grids before performing radiative transfer/photoionization calculations, results in significant errors in the temperature and ionization structure of the region. Furthermore, we describe the mathematical process of tracing photon energy packets through a Voronoi tessellation, including optimizations, treating problematic cases and boundary conditions. We perform various benchmarks using both the original version of MOCASSIN and the modified version using the Voronoi tessellation. We show that for uniform grids, or equivalently a cubic lattice of cell generating points, the new Voronoi version gives the same results as the original Cartesian grid version of MOCASSIN for all benchmarks. For non-uniform initial conditions, such as using snapshots from smoothed particle hydrodynamics simulations, we show that the Voronoi version performs better than the Cartesian grid version, resulting in much better resolution in dense regions.
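Locating the Voronoi cell that contains a photon packet reduces to a nearest-generator query, since a Voronoi cell is by definition the region closest to its generating point; a minimal sketch is given below (the exact tracing of cell-face crossings done in the modified MOCASSIN is not attempted here).

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Generating points of the Voronoi tessellation (e.g. SPH particle positions).
generators = rng.random((5000, 3))
tree = cKDTree(generators)

# A Voronoi cell is the set of points closer to its generator than to any
# other generator, so cell membership is a nearest-neighbour query.
photon_positions = rng.random((10, 3))
_, cell_index = tree.query(photon_positions)
print(cell_index)        # index of the Voronoi cell (generator) for each photon
```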
5. The validity of the density scaling method in primary electron transport for photon and electron beams
SciTech Connect
Woo, M.K.; Cunningham, J.R.
1990-03-01
In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed.
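For reference, the basic density-scaling operation the paper examines amounts to weighting geometric path segments by relative electron density to obtain a water-equivalent (radiological) depth; a minimal sketch with made-up slab values:

```python
# Radiological (density-scaled) depth through a layered phantom: each
# geometric segment is weighted by its electron density relative to water.
# Layer thicknesses [cm] and relative electron densities are illustrative.
layers = [
    ("water", 2.0, 1.000),
    ("cork",  3.0, 0.190),
    ("air",   1.5, 0.001),
    ("water", 4.0, 1.000),
]

geometric_depth = sum(t for _, t, _ in layers)
radiological_depth = sum(t * rho_e for _, t, rho_e in layers)

print(f"geometric depth    = {geometric_depth:.2f} cm")
print(f"radiological depth = {radiological_depth:.2f} cm (water-equivalent)")
```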
6. Technical Note: Study of the electron transport parameters used in PENELOPE for the Monte Carlo simulation of Linac targets
SciTech Connect
Rodriguez, Miguel; Sempau, Josep; Brualla, Lorenzo
2015-06-15
Purpose: The Monte Carlo simulation of electron transport in Linac targets using the condensed history technique is known to be problematic owing to a potential dependence of absorbed dose distributions on the electron step length. In the PENELOPE code, the step length is partially determined by the transport parameters C1 and C2. The authors have investigated the effect on the absorbed dose distribution of the values given to these parameters in the target. Methods: A monoenergetic 6.26 MeV electron pencil beam from a point source was simulated impinging normally on a cylindrical tungsten target. Electrons leaving the tungsten were discarded. Radial absorbed dose profiles were obtained at 1.5 cm of depth in a water phantom located at 100 cm for values of C1 and C2 in the target both equal to 0.1, 0.01, or 0.001. A detailed simulation case was also considered and taken as the reference. Additionally, lateral dose profiles were estimated and compared with experimental measurements for a 6 MV photon beam of a Varian Clinac 2100 for the cases of C1 and C2 both set to 0.1 or 0.001 in the target. Results: On the central axis, the dose obtained for the case C1 = C2 = 0.1 shows a deviation of (17.2% ± 1.2%) with respect to the detailed simulation. This difference decreases to (3.7% ± 1.2%) for the case C1 = C2 = 0.01. The case C1 = C2 = 0.001 produces a radial dose profile that is equivalent to that of the detailed simulation within the reached statistical uncertainty of 1%. The effect is also appreciable in the crossline dose profiles estimated for the realistic geometry of the Linac. In another simulation, it was shown that the error made by choosing inappropriate transport parameters can be masked by tuning the energy and focal spot size of the initial beam. Conclusions: The use of large path lengths for the condensed simulation of electrons in a Linac target with PENELOPE leads to deviations of the dose in the patient or phantom. Based on the results obtained in this work, values of C1 and C2 larger than 0.001 should not be used in Linac targets without further investigation.
7. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered as the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax dose for those open fields tested was improved, on average, from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved.
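A heavily simplified sketch of the commissioning idea as a regularised linear least-squares problem (the paper's formulation is constrained and solved with an augmented Lagrangian method; the matrices and weights below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

# Pre-computed dose in water for each phase-space-let (PSL): one column per PSL.
n_points, n_psl = 400, 60
D = rng.random((n_points, n_psl))

# "Measured" dose, here synthesised from hidden true weights plus noise.
w_true = 1.0 + 0.2 * np.sin(np.linspace(0, 3, n_psl))
m = D @ w_true + 0.01 * rng.normal(size=n_points)

# Smoothness regulariser: first-difference operator on neighbouring PSL weights.
Lop = np.eye(n_psl, k=1)[:-1] - np.eye(n_psl)[:-1]
lam = 1.0

# Minimise ||D w - m||^2 + lam * ||L w||^2 via the normal equations.
A = D.T @ D + lam * Lop.T @ Lop
w = np.linalg.solve(A, D.T @ m)

print("max relative weight error:", np.max(np.abs(w - w_true) / w_true))
```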
8. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy.
PubMed
Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B
2014-11-01
Monte Carlo (MC) simulation is commonly considered as the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax dose for those open fields tested was improved, on average, from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved. PMID:25295381
9. Monte Carlo based method for conversion of in-situ gamma ray spectra obtained with a portable Ge detector to an incident photon flux energy distribution.
PubMed
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
1998-02-01
A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to photon flux energy distribution is proposed. The spectrum is first stripped of the partial absorption and cosmic-ray events leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full absorption efficiency curve of the detector determined by calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using the CERN's GEANT library. Using the detector's characteristics given by the manufacturer as input it is impossible to reproduce experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency has to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
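Once the spectrum has been stripped to full-absorption events, the conversion is essentially a bin-wise division by the full-absorption efficiency and live time; a minimal sketch, with the efficiency expressed as an effective area and all numbers made up:

```python
import numpy as np

# Stripped spectrum: full-absorption counts in a few gamma lines (made-up numbers).
E_keV  = np.array([186.0, 352.0, 609.0, 1120.0, 1460.0, 2614.0])
counts = np.array([1.2e4, 3.4e4, 2.8e4, 9.5e3, 4.1e4, 6.3e3])

# Full-absorption efficiency expressed as an effective area [cm^2], i.e.
# full-absorption counts per unit incident fluence (illustrative values; in
# the paper this curve comes from calibrated point sources plus GEANT runs).
eff_area_cm2 = np.array([9.1, 6.6, 4.6, 3.0, 2.5, 1.6])

live_time_s = 3600.0

# Photon flux per line [photons / cm^2 / s]
flux = counts / (eff_area_cm2 * live_time_s)
for e, f in zip(E_keV, flux):
    print(f"{e:7.1f} keV : {f:.3e} photons/cm^2/s")
```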
10. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations
Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.
2014-06-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
11. Pre-conditioned backward Monte Carlo solutions to radiative transport in planetary atmospheres. Fundamentals: Sampling of propagation directions in polarising media
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. As classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better control than forward integration over the planet region contributing to the solution, and this presents a clear advantage when estimating the disk-integrated signal at moderate and large phase angles. A one-slab, plane-parallel version of the PBMC algorithm is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A72
12. The lower timing resolution bound for scintillators with non-negligible optical photon transport time in time-of-flight PET.
PubMed
Vinke, Ruud; Olcott, Peter D; Cates, Joshua W; Levin, Craig S
2014-10-21
In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. Hereby, the timing resolution bound can directly be calculated from Monte Carlo generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed form solutions of arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 mm × 3 mm × 20 mm LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single photoelectron pulse rise time. PMID:25255807
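One standard route to such a bound, sketched below with a simple bi-exponential pulse convolved with photosensor jitter standing in for the Monte Carlo arrival-time distribution, is the Cramér-Rao limit on estimating a common time shift from the N registered photon timestamps (the paper's derivation is more general; all parameter values here are illustrative):

```python
import numpy as np

# Arrival-time density of detected scintillation photons: bi-exponential
# scintillation pulse convolved with a Gaussian single-photon time jitter
# (a stand-in for the MC arrival-time distribution, which in the paper also
# includes optical photon transport inside the crystal).
tau_r, tau_d = 0.07e-9, 40e-9         # LYSO-like rise/decay times [s]
sigma_spr = 0.12e-9                   # photosensor single-photon jitter [s]

dt = 5e-12
t = np.arange(0.0, 300e-9, dt)
pulse = (np.exp(-t / tau_d) - np.exp(-t / tau_r)) / (tau_d - tau_r)

tg = np.arange(-5 * sigma_spr, 5 * sigma_spr, dt)    # short jitter kernel
g = np.exp(-0.5 * (tg / sigma_spr) ** 2)
g /= g.sum()

f = np.convolve(pulse, g, mode="same")
f /= np.trapz(f, t)                   # normalised detected-photon time density

# Cramer-Rao bound on estimating a common time shift from N photon
# timestamps: var >= 1 / (N * I1), with per-photon Fisher information
# I1 = integral of (f'(t))^2 / f(t) dt.
df = np.gradient(f, dt)
mask = f > f.max() * 1e-9
I1 = np.trapz(df[mask] ** 2 / f[mask], t[mask])

for N in (500, 2000, 8000):
    fwhm = 2.355 / np.sqrt(N * I1)    # per-detector bound, Gaussian-equivalent FWHM
    print(f"N = {N:5d} detected photons: CRLB ~ {fwhm * 1e12:.0f} ps FWHM")
```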
13. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams.
PubMed
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-21
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (gamma) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% +/- 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 x 10 cm(2) field at the first density interface from tissue to lung equivalent material. Small fields (2 x 2 cm(2)) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% +/- 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D gamma-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for gamma(>1) was 4.7% +/- 3.7%, the average gamma(1%) value was 1.19 +/- 0.16 and the mean 2D gamma-value was 0.44 +/- 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions. PMID:19934489
14. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-01
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 × 10 cm2 field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm2) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% ± 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D γ-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for γ>1 was 4.7% ± 3.7%, the average γ1% value was 1.19 ± 0.16 and the mean 2D γ-value was 0.44 ± 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions.
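For context, a common discrete formulation of the γ-index evaluation used in both records above can be sketched as follows (one of several conventions in the literature; profiles and criteria are illustrative):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """Discrete 1-D gamma index of an evaluated profile against a reference.
    dd: dose-difference criterion as a fraction of the reference maximum,
    dta: distance-to-agreement criterion in the units of x (here mm)."""
    d_norm = dd * d_ref.max()
    gammas = np.empty_like(d_eval)
    for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
        cap = np.sqrt(((x_ref - xe) / dta) ** 2 + ((d_ref - de) / d_norm) ** 2)
        gammas[i] = cap.min()
    return gammas

# Illustrative profiles: a reference field edge vs. a slightly shifted,
# slightly rescaled evaluated profile.
x = np.linspace(-50.0, 50.0, 201)                         # mm
ref = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 3.0))
ev = 1.02 / (1.0 + np.exp((np.abs(x - 0.8) - 30.0) / 3.0))

g = gamma_1d(x, ref, x, ev)
print(f"mean gamma = {g.mean():.2f}, fraction with gamma > 1 = {(g > 1).mean():.1%}")
```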
15. MCNP: Photon benchmark problems
SciTech Connect
Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.
1991-09-01
The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs.
16. One-dimensional hopping transport in disordered organic solids. II. Monte Carlo simulations
Kohary, K.; Cordes, H.; Baranovskii, S. D.; Thomas, P.; Yamasaki, S.; Hensel, F.; Wendorff, J.-H.
2001-03-01
Drift mobility of charge carriers in strongly anisotropic disordered organic media is studied by Monte Carlo computer simulations. Results for the nearest-neighbor hopping are in excellent agreement with those of the analytic theory (Cordes et al., preceding paper). It is widely believed that the low-field drift mobility in disordered organic solids has the form μ ∝ exp[-(T0/T)²] with characteristic temperature T0 depending solely on the scale of the energy distribution of localized states responsible for transport. Taking into account electron transitions to more distant sites than the nearest neighbors, we show that this dependence is not universal and the parameter T0 depends also on the concentration of localized states and on the decay length of the electron wave function in localized states. The computer simulation results show that correlations in the distribution of localized states essentially influence not only the field dependence, as known from the literature, but also the temperature dependence of the drift mobility. In particular, strong space-energy correlations diminish the role of long-range hopping transitions in the charge carrier transport.
17. A new Monte Carlo program for simulating light transport through Port Wine Stain skin.
PubMed
Lister, T; Wright, P A; Chappell, P H
2014-05-01
A new Monte Carlo program is presented for simulating light transport through clinically normal skin and skin containing Port Wine Stain (PWS) vessels. The program consists of an eight-layer mathematical skin model constructed from optical coefficients described previously. A simulation including diffuse illumination at the surface and subsequent light transport through the model is carried out using a radiative transfer theory ray-tracing technique. Total reflectance values over 39 wavelengths are scored by the addition of simulated light returning to the surface within a specified region and surface reflections (calculated using Fresnel's equations). These reflectance values are compared to measurements from individual participants, and characteristics of the model are adjusted until adequate agreement is produced between simulated and measured skin reflectance curves. The absorption and scattering coefficients of the epidermis are adjusted through changes in the simulated concentrations and mean diameters of epidermal melanosomes to reproduce non-lesional skin colour. Pseudo-cylindrical horizontal vessels are added to the skin model, and their simulated mean depths, diameters and number densities are adjusted to reproduce measured PWS skin colour. Accurate reproductions of colour measurement data are produced by the program, resulting in realistic predictions of melanin and PWS blood vessel parameters. Using a modest personal computer, the simulation currently requires an average of five and a half days to complete. PMID:24142045
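The core radiative-transfer ray-tracing loop that such a program builds on can be sketched as follows: exponential free-path sampling, implicit absorption weighting and Henyey-Greenstein scattering in a single semi-infinite homogeneous medium (the paper's eight-layer skin model, discrete vessels and Fresnel boundaries are not included; optical properties are placeholders).

```python
import numpy as np

rng = np.random.default_rng(5)

# Bulk optical properties (illustrative dermis-like values, not the paper's).
mu_a, mu_s, g = 2.0, 200.0, 0.8            # absorption, scattering [1/cm], anisotropy
mu_t = mu_a + mu_s
albedo = mu_s / mu_t

def hg_cosine():
    """Sample a scattering-angle cosine from the Henyey-Greenstein phase function (g != 0)."""
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def scatter(d, cos_t):
    """Rotate unit direction d by polar angle arccos(cos_t) and a random azimuth."""
    phi = 2.0 * np.pi * rng.random()
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    if abs(d[2]) > 0.99999:                      # near the pole, use the simple form
        return np.array([sin_t * np.cos(phi),
                         sin_t * np.sin(phi),
                         np.sign(d[2]) * cos_t])
    den = np.sqrt(1.0 - d[2] * d[2])
    return np.array([
        sin_t * (d[0] * d[2] * np.cos(phi) - d[1] * np.sin(phi)) / den + d[0] * cos_t,
        sin_t * (d[1] * d[2] * np.cos(phi) + d[0] * np.sin(phi)) / den + d[1] * cos_t,
        -sin_t * np.cos(phi) * den + d[2] * cos_t,
    ])

# Launch photons straight down into a semi-infinite medium occupying z > 0 and
# score the weight that re-emerges through the surface (matched boundary).
n_photons, reflected = 5000, 0.0
for _ in range(n_photons):
    pos, d, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
    while True:
        pos = pos + d * (-np.log(rng.random()) / mu_t)   # exponential free path
        if pos[2] < 0.0:                                 # escaped through the surface
            reflected += w
            break
        w *= albedo                                      # implicit absorption weighting
        if w < 1e-3:                                     # Russian roulette termination
            if rng.random() > 0.1:
                break
            w *= 10.0
        d = scatter(d, hg_cosine())

print(f"total diffuse reflectance ~ {reflected / n_photons:.3f}")
```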
18. Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
SciTech Connect
Mei, S. Knezevic, I.; Maurer, L. N.; Aksamija, Z.
2014-10-28
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011);S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
19. Oxygen transport properties estimation by classical trajectory–direct simulation Monte Carlo
SciTech Connect
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
2015-05-15
Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES), obtained by molecular beams scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties by different equilibrium and non-equilibrium CT-DSMC based simulations that provide close values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300–900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.
20. Experimental validation of a coupled neutron-photon inverse radiation transport solver
Mattingly, John; Mitchell, Dean J.; Harding, Lee T.
2011-10-01
Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
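The nonlinear-regression idea is easy to demonstrate on a toy scale: invent a two-observable forward model (a gross neutron rate and a gamma line rate as functions of source mass and reflector thickness), generate one noisy synthetic "measurement", and let scipy recover the configuration. The forward-model coefficients below are placeholders with no relation to the Sandia solver or to real plutonium signatures.

    # Toy inverse radiation transport by nonlinear regression: fit an invented
    # two-observable forward model to a noisy synthetic measurement.
    import numpy as np
    from scipy.optimize import least_squares

    def forward(params):
        mass, thickness = params            # kg of source, cm of reflector
        # Invented responses: neutron leakage grows with mass, photons are
        # attenuated by the reflector; the coefficients are placeholders.
        neutron_rate = 4.0e3 * mass * np.exp(-0.08 * thickness) * (1 + 0.05 * thickness)
        gamma_rate = 1.5e4 * mass * np.exp(-0.35 * thickness)
        return np.array([neutron_rate, gamma_rate])

    rng = np.random.default_rng(7)
    true_params = np.array([4.5, 7.62])                 # the "unknown" configuration
    measured = forward(true_params) * rng.normal(1.0, 0.02, size=2)

    def residuals(params):
        # Weight by the approximate counting statistics of the measurement.
        return (forward(params) - measured) / np.sqrt(measured)

    fit = least_squares(residuals, x0=[2.0, 1.0], bounds=([0.1, 0.0], [20.0, 20.0]))
    print("inferred mass [kg], reflector thickness [cm]:", np.round(fit.x, 2))

With more observables than free parameters (a full gamma spectrum plus multiplicity moments, say) the same least-squares machinery simply gains redundancy.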
2. Suppression of population transport and control of exciton distributions by entangled photons
Schlawin, Frank; Dorfman, Konstantin E.; Fingerhut, Benjamin P.; Mukamel, Shaul
2013-04-01
Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subjected to this uncertainty.
3. Development of a photon-cell interactive monte carlo simulation for non-invasive measurement of blood glucose level by Raman spectroscopy.
PubMed
Sakota, Daisuke; Kosaka, Ryo; Nishida, Masahiro; Maruyama, Osamu
2015-08-01
Turbidity variation is one of the major limitations in Raman spectroscopy for quantifying blood components, such as glucose, non-invasively. To overcome this limitation, we have developed a Raman scattering simulation using a photon-cell interactive Monte Carlo (pciMC) model that tracks photon migration in both the extra- and intracellular spaces without relying on the macroscopic scattering phase function and anisotropy factor. The interaction of photons at the plasma-cell boundary of randomly oriented three-dimensionally biconcave red blood cells (RBCs) is modeled using geometric optics. The validity of the developed pciMCRaman was investigated by comparing simulation and experimental results of Raman spectroscopy of glucose level in a bovine blood sample. The scattering of the excitation laser at a wavelength of 785 nm was simulated considering the changes in the refractive index of the extracellular solution. Based on the excitation laser photon distribution within the blood, the Raman photon derived from the hemoglobin and glucose molecule at the Raman shift of 1140 cm(-1) = 862 nm was generated, and the photons reaching the detection area were counted. The simulation and experimental results showed good correlation. It is speculated that pciMCRaman can provide information about the ability and limitations of the measurement of blood glucose level. PMID:26737759
4. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams
Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei
2015-06-01
The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends very strongly on the accuracy of the accompanying high energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among the codes and against carefully measured values, for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Also, energy dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using optimal simple spherical and detailed IC models. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital and (e) BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes in the photon energy region below 0.1 MeV and a similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams. For the Mg(Ar) chamber, however, the deviations reached 7.8-16.5% for X-ray beams below 120 kVp. In this study we were especially interested in BNCT doses, where the low energy photon contribution cannot simply be ignored; the MCNP model is recognized as the most suitable for simulating the wide photon-electron and neutron energy distributed responses of the paired ICs. Also, MCNP provides the best prediction of BNCT source adjustment from the detector's neutron and photon responses.
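The dose-derivation step that makes these response functions matter is a small linear-algebra problem: two chambers with different relative neutron and photon sensitivities give two readings, from which the two dose components follow by a 2x2 solve. The sensitivity numbers and readings below are placeholders, not the calibration of the TE(TE) and Mg(Ar) chambers discussed above.

    # Sketch of the paired ionization chamber arithmetic: solve a 2x2 system
    # for the neutron and photon dose components. Sensitivities are invented.
    import numpy as np

    # Rows: chambers; columns: (neutron sensitivity, photon sensitivity),
    # in reading per unit dose (arbitrary units).
    S = np.array([[0.95, 1.00],    # tissue-equivalent chamber: responds to both
                  [0.10, 1.00]])   # Mg(Ar)-like chamber: mostly photon-sensitive

    readings = np.array([2.40, 0.95])          # hypothetical chamber readings

    dose_n, dose_g = np.linalg.solve(S, readings)
    print(f"neutron dose component: {dose_n:.3f}   photon dose component: {dose_g:.3f}")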
5. Event-by-event Monte Carlo simulation of radiation transport in vapor and liquid water
Papamichael, Georgios Ioannis
A Monte-Carlo Simulation is presented for Radiation Transport in water. This process is of utmost importance, having applications in oncology and therapy of cancer, in protecting people and the environment, waste management, radiation chemistry and on some solid-state detectors. It's also a phenomenon of interest in microelectronics on satellites in orbit that are subject to the solar radiation and in space-craft design for deep-space missions receiving background radiation. The interaction of charged particles with the medium is primarily due to their electromagnetic field. Three types of interaction events are considered: Elastic scattering, impact excitation and impact ionization. Secondary particles (electrons) can be generated by ionization. At each stage, along with the primary particle we explicitly follow all secondary electrons (and subsequent generations). Theoretical, semi-empirical and experimental formulae with suitable corrections have been used in each case to model the cross sections governing the quantum mechanical process of interactions, thus determining stochastically the energy and direction of outgoing particles following an event. Monte-Carlo sampling techniques have been applied to accurate probability distribution functions describing the primary particle track and all secondary particle-medium interaction. A simple account of the simulation code and a critical exposition of its underlying assumptions (often missing in the relevant literature) are also presented with reference to the model cross sections. Model predictions are in good agreement with existing computational data and experimental results. By relying heavily on a theoretical formulation, instead of merely fitting data, it is hoped that the model will be of value in a wider range of applications. Possible future directions that are the object of further research are pointed out.
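The bookkeeping of an event-by-event simulation (choose an interaction channel from the cross sections, lose energy, push secondaries on a stack, follow everything down to a cutoff) can be sketched in a few lines. The channel probabilities, energy transfers and cutoff below are invented and energy-independent; real track-structure codes use detailed, energy-dependent cross sections.

    # Schematic event-by-event transport loop with a secondary-particle stack.
    # Channel weights, energy transfers and the cutoff are invented values.
    import numpy as np

    rng = np.random.default_rng(3)

    SIGMA = {"elastic": 0.5, "excitation": 0.8, "ionization": 1.2}   # relative weights
    EXCITATION_LOSS = 8.0    # eV, invented
    BINDING = 16.0           # eV, invented ionization threshold
    CUTOFF = 50.0            # eV, tracking cutoff

    def transport(primary_energy_eV):
        stack = [primary_energy_eV]                  # primary plus all secondaries
        events = {name: 0 for name in SIGMA}
        names = list(SIGMA)
        probs = np.array([SIGMA[n] for n in names])
        probs = probs / probs.sum()
        deposited = 0.0
        while stack:
            energy = stack.pop()
            while energy > CUTOFF:
                name = names[rng.choice(len(names), p=probs)]
                events[name] += 1
                if name == "elastic":
                    continue                          # direction change only (not tracked)
                if name == "excitation":
                    energy -= EXCITATION_LOSS
                    deposited += EXCITATION_LOSS
                else:                                 # ionization: spawn a secondary electron
                    secondary = rng.uniform(0.0, 0.5) * max(energy - BINDING, 0.0)
                    energy -= BINDING + secondary
                    deposited += BINDING
                    if secondary > CUTOFF:
                        stack.append(secondary)       # follow it later
                    else:
                        deposited += secondary        # deposit sub-cutoff energy locally
            deposited += max(energy, 0.0)             # dump the track end
        return events, deposited

    counts, edep = transport(5000.0)
    print(counts, f"| deposited {edep:.0f} eV of 5000 eV")

The printed total equals the primary energy, a cheap sanity check on the stack logic.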
6. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs)
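The tally-batching idea in the thesis can be illustrated with a serial toy: rather than reducing per-history scores across the network, each batch keeps only its local mean, and the statistical uncertainty is formed from the spread of batch means at the end of the run. This is only the bookkeeping pattern, not OpenMC's implementation; the "history" below is a stand-in random score.

    # Serial toy of batch statistics for Monte Carlo tallies: only one number
    # per batch (its mean) would ever need to be communicated.
    import numpy as np

    rng = np.random.default_rng(5)

    def fake_history():
        # Stand-in for the tally score contributed by one particle history.
        return rng.exponential(1.0)

    n_batches, histories_per_batch = 40, 500
    batch_means = np.empty(n_batches)
    for b in range(n_batches):
        scores = [fake_history() for _ in range(histories_per_batch)]
        batch_means[b] = np.mean(scores)      # the only per-batch quantity kept

    estimate = batch_means.mean()
    std_err = batch_means.std(ddof=1) / np.sqrt(n_batches)
    print(f"tally = {estimate:.4f} +/- {std_err:.4f}")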
7. A Monte Carlo evaluation of dose enhancement by cisplatin and titanocene dichloride chemotherapy drugs in brachytherapy with photon emitting sources.
PubMed
Yahya Abadi, Akram; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Knaup, Courtney
2014-06-01
Some chemotherapy drugs contain a high Z element in their structure that can be used for tumour dose enhancement in radiotherapy. In the present study, dose enhancement factors (DEFs) by cisplatin and titanocene dichloride agents in brachytherapy were quantified based on Monte Carlo simulation. Six photon emitting brachytherapy sources were simulated and their dose rate constant and radial dose function were determined and compared with published data. Dose enhancement factor was obtained for 1, 3 and 5 % concentrations of cisplatin and titanocene dichloride chemotherapy agents in a tumour, in soft tissue phantom. The results of the dose rate constant and radial dose function showed good agreement with published data. Our results have shown that depending on the type of chemotherapy agent and brachytherapy source, DEF increases with increasing chemotherapy drug concentration. The maximum in-tumour averaged DEF for cisplatin and titanocene dichloride are 4.13 and 1.48, respectively, reached with 5 % concentrations of the agents, and (125)I source. Dose enhancement factor is considerably higher for both chemotherapy agents with (125)I, (103)Pd and (169)Yb sources, compared to (192)Ir, (198)Au and (60)Co sources. At similar concentrations, dose enhancement for cisplatin is higher compared with titanocene dichloride. Based on the results of this study, combination of brachytherapy and chemotherapy with agents containing a high Z element resulted in higher radiation dose to the tumour. Therefore, concurrent use of chemotherapy and brachytherapy with high atomic number drugs can have the potential benefits of dose enhancement. However, more preclinical evaluations in this area are necessary before clinical application of this method. PMID:24706342
8. Monte Carlo Neutrino Transport through Remnant Disks from Neutron Star Mergers
Richers, Sherwood; Kasen, Daniel; O'Connor, Evan; Fernández, Rodrigo; Ott, Christian D.
2015-11-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10^46 erg is deposited within 45° of the symmetry axis over 300 ms when the central black hole is present. Similarly, 1.9 × 10^48 erg is deposited over 3 s when a hypermassive neutron star sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.
9. Anisotropy collision effect on ion transport in cold gas discharges with Monte Carlo simulation
SciTech Connect
1995-12-31
Ion-molecule collision cross sections and transport and reaction coefficients are among the basic data needed for discharge modelling of non-thermal cold plasmas. In the literature, numerous methods are devoted to the experimental and theoretical determination of these basic data. However, data on ion-molecule collision cross sections are very sparse and in certain cases practically nonexistent for the low and intermediate ion energy range. The aim of this communication is therefore to give, for two ions in their parent gases (N{sub 2}{sup +}/N{sub 2} and O{sub 2}{sup +}/O{sub 2}), the set of collision cross sections involving momentum transfer, symmetric charge transfer and also inelastic (vibration and ionisation) processes. The differential collision cross section is also given in order to take into account the strong anisotropy of elastic collisions, since the ions are scattered mainly in the forward direction in the intermediate energy range. The differential cross sections are fully calculated from a polarization interaction potential in the low energy range and from Lennard-Jones potentials for N{sub 2}{sup +}/N{sub 2} (with a modified form for O{sub 2}{sup +}/O{sub 2}) at higher energies; then, using a swarm unfolding technique, they are fitted until the best agreement is obtained between the transport and reaction coefficients measured in classical swarm experiments and those calculated from Monte Carlo simulation of ion transport over a large range of reduced electric field E/N.
10. Design of a hybrid computational fluid dynamics-monte carlo radiation transport methodology for radioactive particulate resuspension studies.
PubMed
Ali, Fawaz; Waller, Ed
2014-10-01
11. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
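The core geometric operation in quadric-based tracking is the ray-quadric intersection: substitute the ray into the implicit surface equation and keep the nearest positive root of the resulting quadratic. The sketch below shows that step for a general quadric written as x^T A x + b.x + c = 0; it is a minimal illustration, not the PENGEOM tracking algorithm.

    # Ray-quadric intersection: nearest positive distance along the ray o + t*d
    # to the surface x^T A x + b.x + c = 0, or None if there is no crossing.
    import numpy as np

    def ray_quadric(o, d, A, b, c, eps=1e-12):
        o, d = np.asarray(o, float), np.asarray(d, float)
        qa = d @ A @ d
        qb = 2.0 * (o @ A @ d) + b @ d
        qc = o @ A @ o + b @ o + c
        if abs(qa) < eps:                    # degenerate (linear) case along this ray
            if abs(qb) < eps:
                return None
            t = -qc / qb
            return t if t > eps else None
        disc = qb * qb - 4.0 * qa * qc
        if disc < 0.0:
            return None
        sqrt_disc = np.sqrt(disc)
        roots = sorted([(-qb - sqrt_disc) / (2 * qa), (-qb + sqrt_disc) / (2 * qa)])
        for t in roots:
            if t > eps:                      # first surface crossing ahead of the ray
                return t
        return None

    # Example: the unit sphere centered at the origin written as a quadric.
    A = np.eye(3)
    b = np.zeros(3)
    c = -1.0
    t = ray_quadric(o=[0.0, 0.0, -3.0], d=[0.0, 0.0, 1.0], A=A, b=b, c=c)
    print("distance to the sphere along +z:", t)    # expected 2.0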
12. SU-E-T-142: Effect of the Bone Heterogeneity On the Unflattened and Flattened Photon Beam Dosimetry: A Monte Carlo Comparison
SciTech Connect
Chow, J; Owrangi, A
2014-06-01
Purpose: This study compared the dependence of depth dose on bone heterogeneity of unflattened photon beams to that of flattened beams. Monte Carlo simulations (the EGSnrc-based codes) were used to calculate depth doses in phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: Heterogeneous phantom containing a bone layer of 2 cm thick at a depth of 1 cm in water was irradiated by the unflattened and flattened 6 MV photon beams (field size = 10×10 cm{sup 2}). Phase-space files of the photon beams based on the Varian TrueBeam linac were generated by the Geant4 and BEAMnrc codes, and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: Our results showed that the beam output of unflattened photon beams was about 2.1 times larger than the flattened beams in water. Comparing the water phantom to the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned 30°, the deviation of depth dose between the bone and water phantom became larger compared to that with beam angle equal to 0°. Dose ratio of the unflattened and flattened photon beams showed that the unflattened beam has larger depth dose in the buildup region compared to the flattened beam. Conclusion: Although the unflattened photon beam had different beam output and quality compared to the flattened, dose enhancements due to the bone scatter were found similar. However, we discovered that depth dose deviation due to the presence of bone was sensitive to the beam obliquity.
13. Dosimetric advantage of using 6 MV over 15 MV photons in conformal therapy of lung cancer: Monte Carlo studies in patient geometries.
PubMed
Wang, Lu; Yorke, Ellen; Desobry, Gregory; Chui, Chen-Shou
2002-01-01
Many lung cancer patients who undergo radiation therapy are treated with higher energy photons (15-18 MV) to obtain deeper penetration and better dose uniformity. However, the longer range of the higher energy recoil electrons in the low-density medium may cause lateral electronic disequilibrium and degrade the target coverage. To compare the dose homogeneity achieved with lower versus higher energy photon beams, we performed a dosimetric study of 6 and 15 MV three-dimensional (3D) conformal treatment plans for lung cancer using an accurate, patient-specific dose-calculation method based on a Monte Carlo technique. A 6 and 15 MV 3D conformal treatment plan was generated for each of two patients with target volumes exceeding 200 cm(3) on an in-house treatment planning system in routine clinical use. Each plan employed four conformally shaped photon beams. Each dose distribution was recalculated with the Monte Carlo method, utilizing the same beam geometry and patient-specific computed tomography (CT) images. Treatment plans using the two energies were compared in terms of their isodose distributions and dose-volume histograms (DVHs). The 15 MV dose distributions and DVHs generated by the clinical treatment planning calculations were as good as, or slightly better than, those generated for 6 MV beams. However, the Monte Carlo dose calculation predicted increased penumbra width with increased photon energy resulting in decreased lateral dose homogeneity for the 15 MV plans. Monte Carlo calculations showed that all target coverage indicators were significantly worse for 15 MV than for 6 MV; particularly the portion of the planning target volume (PTV) receiving at least 95% of the prescription dose (V(95)) dropped dramatically for the 15 MV plan in comparison to the 6 MV. Spinal cord and lung doses were clinically equivalent for the two energies. In treatment planning of tumors that abut lung tissue, lower energy (6 MV) photon beams should be preferred over higher energies (15-18 MV) because of the significant loss of lateral dose equilibrium for high-energy beams in the low-density medium. Any gains in radial dose uniformity across steep density gradients for higher energy beams must be weighed carefully against the lateral beam degradation due to penumbra widening. PMID:11818004
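The coverage metrics quoted above (the DVH and V95) reduce to simple array operations once a per-voxel dose distribution is available. The sketch below builds a cumulative DVH and evaluates V95 and D90 on synthetic target doses; the prescription value and the dose samples are invented.

    # Cumulative dose-volume histogram, V95 and D90 from per-voxel doses.
    # The prescription dose and the synthetic dose values are assumptions.
    import numpy as np

    rng = np.random.default_rng(11)
    prescription = 60.0                                   # Gy (assumed)
    # Synthetic PTV doses: most voxels near prescription, plus a cold tail.
    ptv_dose = np.concatenate([rng.normal(60.0, 1.5, 9000),
                               rng.normal(54.0, 2.0, 1000)])

    def cumulative_dvh(dose, levels):
        # Fraction of the volume receiving at least each dose level.
        return np.array([(dose >= d).mean() for d in levels])

    levels = np.linspace(0.0, 70.0, 141)
    curve = cumulative_dvh(ptv_dose, levels)

    v95 = (ptv_dose >= 0.95 * prescription).mean()
    d90 = np.percentile(ptv_dose, 10)         # dose received by 90% of the volume
    print(f"V95 = {100 * v95:.1f}%   D90 = {d90:.1f} Gy   "
          f"coverage at prescription = {100 * np.interp(prescription, levels, curve):.1f}%")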
14. penMesh--Monte Carlo radiation transport simulation in a triangle mesh geometry.
PubMed
2009-12-01
We have developed a general-purpose Monte Carlo simulation code, called penMesh, that combines the accuracy of the radiation transport physics subroutines from PENELOPE and the flexibility of a geometry based on triangle meshes. While the geometric models implemented in most general-purpose codes--such as PENELOPE's quadric geometry--impose some limitations in the shape of the objects that can be simulated, triangle meshes can be used to describe any free-form (arbitrary) object. Triangle meshes are extensively used in computer-aided design and computer graphics. We took advantage of the sophisticated tools already developed in these fields, such as an octree structure and an efficient ray-triangle intersection algorithm, to significantly accelerate the triangle mesh ray-tracing. A detailed description of the new simulation code and its ray-tracing algorithm is provided in this paper. Furthermore, we show how it can be readily used in medical imaging applications thanks to the detailed anatomical phantoms already available. In particular, we present a whole body radiography simulation using a triangulated version of the anthropomorphic NCAT phantom. An example simulation of scatter fraction measurements using a standardized abdomen and lumbar spine phantom, and a benchmark of the triangle mesh and quadric geometries in the ray-tracing of a mathematical breast model, are also presented to show some of the capabilities of penMesh. PMID:19435677
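The workhorse of any triangle-mesh tracker is the ray-triangle intersection test; the widely used Moller-Trumbore formulation fits in a short function. The sketch below omits the octree acceleration mentioned in the abstract and simply tests one ray against one triangle.

    # Moller-Trumbore ray-triangle intersection: distance along the ray, or None.
    import numpy as np

    def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
        orig, direc = np.asarray(orig, float), np.asarray(direc, float)
        v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direc, e2)
        det = e1 @ p
        if abs(det) < eps:                 # ray parallel to the triangle plane
            return None
        inv_det = 1.0 / det
        s = orig - v0
        u = (s @ p) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = (direc @ q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = (e2 @ q) * inv_det
        return t if t > eps else None      # hit distance in front of the ray origin

    # Quick check: a ray down the z-axis through a triangle in the z = 1 plane.
    hit = ray_triangle([0.2, 0.2, 0.0], [0.0, 0.0, 1.0],
                       [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0])
    print("hit distance:", hit)            # expected 1.0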
15. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations
SciTech Connect
Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.
2012-07-01
In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
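For contrast with the recursive decomposition described above, the brute-force stochastic estimate it improves on is worth seeing once: sample points uniformly in a bounding box, evaluate the CSG membership test, and report the hit fraction with its one-sigma statistical uncertainty. The component below (a unit sphere minus a coaxial cylindrical bore) is chosen only because its volume is known analytically.

    # Hit-or-miss stochastic volume estimate for a small CSG component.
    import numpy as np

    rng = np.random.default_rng(2)

    def inside(p):
        sphere = p[:, 0]**2 + p[:, 1]**2 + p[:, 2]**2 <= 1.0     # unit sphere
        bore = p[:, 0]**2 + p[:, 1]**2 <= 0.3**2                 # cylinder along z, r = 0.3
        return sphere & ~bore                                     # CSG: sphere minus cylinder

    n = 1_000_000
    box_lo, box_hi = -1.0, 1.0
    pts = rng.uniform(box_lo, box_hi, size=(n, 3))                # bounding-box samples
    frac = inside(pts).mean()
    box_volume = (box_hi - box_lo) ** 3
    volume = frac * box_volume
    sigma = box_volume * np.sqrt(frac * (1.0 - frac) / n)

    exact = (4.0 * np.pi / 3.0) * (1.0 - 0.3**2) ** 1.5           # analytic volume
    print(f"MC volume = {volume:.4f} +/- {sigma:.4f}  (analytic {exact:.4f})")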
16. Monte Carlo simulation of radiation transport in human skin with rigorous treatment of curved tissue boundaries
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
17. Proton transport in water and DNA components: A Geant4 Monte Carlo simulation
Champion, C.; Incerti, S.; Tran, H. N.; Karamitros, M.; Shin, J. I.; Lee, S. B.; Lekadir, H.; Bernal, M.; Francis, Z.; Ivanchenko, V.; Fojón, O. A.; Hanssen, J.; Rivarola, R. D.
2013-07-01
Accurate modeling of DNA damages resulting from ionizing radiation remains a challenge of today's radiobiology research. An original set of physics processes has been recently developed for modeling the detailed transport of protons and neutral hydrogen atoms in liquid water and in DNA nucleobases using the Geant4-DNA extension of the open source Geant4 Monte Carlo simulation toolkit. The theoretical cross sections as well as the mean energy transfers during the different ionizing processes were taken from recent works based on classical as well as quantum mechanical predictions. Furthermore, in order to compare energy deposition patterns in liquid water and DNA material, we here propose a simplified cellular nucleus model made of spherical voxels, each containing randomly oriented nanometer-size cylindrical targets filled with either liquid water or DNA material (DNA nucleobases) both with a density of 1 g/cm3. These cylindrical volumes have dimensions comparable to genetic material units of mammalian cells, namely, 25 nm (diameter) × 25 nm (height) for chromatin fiber segments, 10 nm (d) × 5 nm (h) for nucleosomes and 2 nm (d) × 2 nm (h) for DNA segments. Frequencies of energy deposition in the cylindrical targets are presented and discussed.
18. Monte Carlo simulation of radiation transport in human skin with rigorous treatment of curved tissue boundaries.
PubMed
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries. PMID:25604544
19. Comparison of the Angular Dependence of Monte Carlo Particle Transport Modeling Software
Chancellor, Jeff; Guetersloh, Stephen
2011-04-01
Modeling nuclear interactions is relevant to cancer radiotherapy, space mission dosimetry and the use of heavy ion research beams. In heavy ion radiotherapy, fragmentation of the primary ions has the unwanted effect of reducing dose localization, contributing to a non-negligible dose outside the volume of tissue being treated. Fragmentation in spaceship walls, hardware and human tissue can lead to large uncertainties in estimates of radiation risk inside the crew habitat. Radiation protection mandates very conservative dose estimations, and reduction of uncertainties is critical to avoid limitations on allowed mission duration and maximize shielding design. Though fragment production as a function of scattering angle has not been well characterized, experimental simulation with Monte Carlo particle transport models have shown good agreement with data obtained from on-axis detectors with large acceptance angles. However, agreement worsens with decreasing acceptance angle, attributable in part to incorrect transverse momentum assumptions in the models. We will show there is an unacceptable angular discrepancy in modeling off-axis fragments produced by inelastic nuclear interaction of the primary ion. The results will be compared to published measurements of 400 MeV/nucleon carbon beams interacting in C, CH2, Al, Cu, Sn, and Pb targets.
20. Comparison of the Angular Dependence of Monte Carlo Particle Transport Modeling Software
Chancellor, Jeff; Guetersloh, Stephen
2011-03-01
Modeling nuclear interactions is relevant to cancer radiotherapy, space mission dosimetry and the use of heavy ion research beams. In heavy ion radiotherapy, fragmentation of the primary ions has the unwanted effect of reducing dose localization, contributing to a non-negligible dose outside the volume of tissue being treated. Fragmentation in spaceship walls, hardware and human tissue can lead to large uncertainties in estimates of radiation risk inside the crew habitat. Radiation protection mandates very conservative dose estimations, and reduction of uncertainties is critical to avoid limitations on allowed mission duration and maximize shielding design. Though fragment production as a function of scattering angle has not been well characterized, experimental simulation with Monte Carlo particle transport models have shown good agreement with data obtained from on-axis detectors with large acceptance angles. However, agreement worsens with decreasing acceptance angle, attributable in part to incorrect transverse momentum assumptions in the models. We will show there is an unacceptable angular discrepancy in modeling off-axis fragments produced by inelastic nuclear interaction of the primary ion. The results will be compared to published measurements of 400 MeV/nucleon carbon beams interacting in C, CH2, Al, Cu, Sn, and Pb targets.
1. Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle
de Bellefon, G. M.; Wirth, B. D.
2011-06-01
A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle have been compared to available experimental data from the German HTR Fuel Program ( Gontard and Nabielek [1]) and Minato and co-workers ( Minato et al. [2]).
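A stripped-down version of such a model, a tracer hopping across a one-dimensional layer that contains randomly placed traps with a deeper escape barrier, shows how the kinetic Monte Carlo residence-time loop produces a strongly temperature-dependent release time. The barrier heights, attempt frequency, layer thickness and trap fraction below are invented for illustration and are not TRISO parameters.

    # Toy residence-time kinetic Monte Carlo: a tracer atom hops across a 1-D
    # layer with randomly placed trap sites (extra binding energy). All numbers
    # are illustrative assumptions, not fitted TRISO-fuel values.
    import numpy as np

    rng = np.random.default_rng(4)
    kB = 8.617e-5            # eV/K
    nu0 = 1.0e13             # attempt frequency (1/s)
    E_m = 1.0                # migration barrier on normal sites (eV)
    E_trap = 0.4             # extra binding at a trap site (eV)
    n_sites = 40             # layer thickness in lattice sites
    trap_fraction = 0.05

    def release_time(T, is_trap):
        beta = 1.0 / (kB * T)
        rate_free = nu0 * np.exp(-beta * E_m)
        rate_trap = nu0 * np.exp(-beta * (E_m + E_trap))
        site, t = 0, 0.0
        while site < n_sites:                     # released once it crosses the layer
            rate = rate_trap if is_trap[site] else rate_free
            t += rng.exponential(1.0 / (2.0 * rate))   # two possible hops, total rate 2*rate
            site += 1 if rng.random() < 0.5 else -1
            if site < 0:
                site = 0                          # inner boundary treated as reflective
        return t

    for T in (1200.0, 1500.0, 1800.0):            # anneal-like temperature range (K)
        is_trap = rng.random(n_sites) < trap_fraction
        times = [release_time(T, is_trap) for _ in range(150)]
        print(f"T = {T:.0f} K   mean release time ~ {np.mean(times):.2e} s")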
2. Poster — Thur Eve — 48: Dosimetric dependence on bone backscatter in orthovoltage radiotherapy: A Monte Carlo photon fluence spectral study
SciTech Connect
Chow, J; Grigor, G
2014-08-15
This study investigated dosimetric impact due to the bone backscatter in orthovoltage radiotherapy. Monte Carlo simulations were used to calculate depth doses and photon fluence spectra using the EGSnrc-based code. Inhomogeneous bone phantom containing a thin water layer (1–3 mm) on top of a bone (1 cm) to mimic the treatment sites of forehead, chest wall and kneecap was irradiated by the 220 kVp photon beam produced by the Gulmay D3225 x-ray machine. Percentage depth doses and photon energy spectra were determined using Monte Carlo simulations. Results of percentage depth doses showed that the maximum bone dose was about 210–230% larger than the surface dose in the phantoms with different water thicknesses. Surface dose was found to be increased from 2.3 to 3.5%, when the distance between the phantom surface and bone was increased from 1 to 3 mm. This increase of surface dose on top of a bone was due to the increase of photon fluence intensity, resulting from the bone backscatter in the energy range of 30 – 120 keV, when the water thickness was increased. This was also supported by the increase of the intensity of the photon energy spectral curves at the phantom and bone surface as the water thickness was increased. It is concluded that if the bone inhomogeneity during the dose prescription in the sites of forehead, chest wall and kneecap with soft tissue thickness = 1–3 mm is not considered, there would be an uncertainty in the dose delivery.
3. Improved Hybrid Monte Carlo/n-Moment Transport Equations Model for the Polar Wind
Barakat, A. R.; Ji, J.; Schunk, R. W.
2013-12-01
In many space plasma problems (e.g. terrestrial polar wind, solar wind, etc.), the plasma gradually evolves from dense collision-dominated into rarefied collisionless conditions. For decades, numerous attempts were made in order to address this type of problem using simulations based on one of two approaches. These approaches are: (1) the (fluid-like) Generalized Transport Equations, GTE, and (2) the particle-based Monte Carlo (MC) techniques. In contrast to the computationally intensive MC, the GTE approach can be considerably more efficient but its validity is questionable outside the collision-dominated region depending on the number of transport parameters considered. There have been several attempts to develop hybrid models that combine the strengths of both approaches. In particular, low-order GTE formulations were applied within the collision-dominated region, while an MC simulation was applied within the collisionless region and in the collisional-to-collisionless transition region. However, attention must be paid to assuring the consistency of the two approaches in the region where they are matched. Contrary to all previous studies, our model pays special attention to the 'matching' issue, and hence eliminates the discontinuities/inaccuracies associated with mismatching. As an example, we applied our technique to the Coulomb-Milne problem because of its relevance to the problem of space plasma flow from high- to low-density regions. We will compare the velocity distribution function and its moments (density, flow velocity, temperature, etc.) from the following models: (1) the pure MC model, (2) our hybrid model, and (3) previously published hybrid models. We will also consider a wide range of the test-to-background mass ratio.
4. Consequences of removing the flattening filter from linear accelerators in generating high dose rate photon beams for clinical applications: A Monte Carlo study verified by measurement
Ishmael Parsai, E.; Pearson, David; Kvale, Thomas
2007-08-01
An Elekta SL-25 medical linear accelerator (Elekta Oncology Systems, Crawley, UK) has been modelled using Monte Carlo simulations with the photon flattening filter removed. It is hypothesized that intensity modulated radiation therapy (IMRT) treatments may be carried out after the removal of this component despite its criticality to standard treatments. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. Both simulated and measured beam profiles showed that dose on the central axis increased, with the Monte Carlo simulations showing an increase by a factor of 2.35 for 6 MV and 4.18 for 10 MV beams. A further consequence of removing the flattening filter was the softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than the depth of maximum dose. A comparison of the points at the field edge showed that dose was reduced at these points by as much as 5.8% for larger fields. In conclusion, the greater photon fluence is expected to result in shorter treatment times, while the reduction in dose outside of the treatment field is strongly suggestive of more accurate dose delivery to the target.
5. Experimental validation of a coupled neutron-photon inverse radiation transport solver.
SciTech Connect
Mattingly, John K.; Harding, Lee; Mitchell, Dean James
2010-03-01
Forward radiation transport is the problem of calculating the radiation field given a description of the radiation source and transport medium. In contrast, inverse transport is the problem of inferring the configuration of the radiation source and transport medium from measurements of the radiation field. As such, the identification and characterization of special nuclear materials (SNM) is a problem of inverse radiation transport, and numerous techniques to solve this problem have been previously developed. The authors have developed a solver based on nonlinear regression applied to deterministic coupled neutron-photon transport calculations. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5-kg sphere of alpha-phase, weapons-grade plutonium. The source was measured in six different configurations: bare, and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses of 1.27, 2.54, 3.81, 7.62, and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to characterize the solver's ability to correctly infer the configuration of the source from its measured signatures.
6. Dependences of mucosal dose on photon beams in head-and-neck intensity-modulated radiation therapy: a Monte Carlo study
SciTech Connect
Chow, James C.L.; Owrangi, Amir M.
2012-07-01
Dependences of mucosal dose in the oral or nasal cavity on the beam energy, beam angle, multibeam configuration, and mucosal thickness were studied for small photon fields using Monte Carlo simulations (EGSnrc-based code), which were validated by measurements. Cylindrical mucosa phantoms (mucosal thickness = 1, 2, and 3 mm) with and without the bone and air inhomogeneities were irradiated by the 6- and 18-MV photon beams (field size = 1 × 1 cm{sup 2}) with gantry angles equal to 0°, 90°, and 180°, and multibeam configurations using 2, 4, and 8 photon beams in different orientations around the phantom. Doses along the central beam axis in the mucosal tissue were calculated. The mucosal surface doses were found to decrease slightly (1% for the 6-MV photon beam and 3% for the 18-MV beam) with an increase of mucosal thickness from 1-3 mm, when the beam angle is 0°. The variation of mucosal surface dose with its thickness became insignificant when the beam angle was changed to 180°, but the dose at the bone-mucosa interface was found to increase (28% for the 6-MV photon beam and 20% for the 18-MV beam) with the mucosal thickness. For different multibeam configurations, the dependence of mucosal dose on its thickness became insignificant when the number of photon beams around the mucosal tissue was increased. The mucosal dose with bone varied with the beam energy, beam angle, multibeam configuration and mucosal thickness for a small segmental photon field. These dosimetric variations are important to consider in improving the treatment strategy, so that mucosal complications in head-and-neck intensity-modulated radiation therapy can be minimized.
7. Kinetic Monte Carlo Model of Charge Transport in Hematite (α-Fe2O3)
SciTech Connect
Kerisit, Sebastien N.; Rosso, Kevin M.
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron-transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parameterized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ~2% of iron(III) sites, whereas the presence of repulsive defects has a linear effect on the electron polaron diffusion. Attractive defects are found to significantly affect electron polaron diffusion at low polaron to defect ratios due to trapping on nanosecond to microsecond time scales. This work indicates that electrons can diffuse away from the initial site of interfacial electron transfer at a rate that is consistent with measured electrical conductivities but that the presence of certain kinds of defects will severely limit the mobility of donated electrons.
8. Monte Carlo neutral particle transport through a binary stochastic mixture using chord length sampling
Donovan, Timothy J.
A Monte Carlo algorithm is developed to estimate the ensemble-averaged behavior of neutral particles within a binary stochastic mixture. A special case stochastic mixture is examined, in which non-overlapping spheres of constant radius are uniformly mixed in a matrix material. Spheres are chosen to represent the stochastic volumes due to their geometric simplicity and because spheres are a common approximation to a large number of applications. The boundaries of the mixture are impenetrable, meaning that spheres in the stochastic mixture cannot be assumed to overlap the mixture boundaries. The algorithm employs a method called Limited Chord Length Sampling (LCLS). While in the matrix material, LCLS uses chord-length sampling to sample the distance to the next stochastic interface. After a surface crossing into a stochastic sphere, transport is treated explicitly until the particle exits or is killed. This capability eliminates the need to explicitly model a representation of the random geometry of the mixture. The algorithm is first proposed and tested against benchmark results for a two dimensional, fixed source model using stand-alone Monte Carlo codes. The algorithm is then implemented and tested in a test version of the Los Alamos Monte Carlo N-Particle code MCNP. This prototype MCNP version has the capability to calculate LCLS results for both fixed source and multiplied source (i.e., eigenvalue) problems. Problems analyzed with MCNP range from simple binary mixtures, designed to test LCLS over a range of optical thicknesses, to a detailed High Temperature Gas Reactor fuel element, which tests the value of LCLS in a current problem of practical significance. Comparisons of LCLS and benchmark results include both accuracy and efficiency comparisons. To ensure conservative efficiency comparisons, the statistical basis for the benchmark technique is derived and a formal method for optimizing the benchmark calculations is developed. LCLS results are compared to results obtained through other methods to gauge accuracy and efficiency. The LCLS model is efficient and provides a high degree of accuracy through a wide range of conditions.
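Chord length sampling itself is compact enough to show in full for the simplest possible observable, the ensemble-averaged uncollided transmission of a pencil beam through a slab of the sphere-in-matrix mixture: matrix segment lengths are drawn from an exponential with the Markovian mean matrix chord length, sphere crossings are drawn from the chord-length distribution of a sphere, and no explicit sphere positions are ever generated. The cross sections, sphere radius and packing fraction below are arbitrary, scattering is ignored, and the boundary treatment that makes the method "limited" is left out.

    # Chord-length-sampling sketch: uncollided transmission through a slab of a
    # binary stochastic mixture of absorbing spheres in an absorbing matrix.
    # Material data and mixing statistics are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(6)
    R = 0.1                               # sphere radius (cm)
    f = 0.15                              # sphere volume fraction
    sig_matrix, sig_sphere = 0.5, 5.0     # total (purely absorbing) cross sections (1/cm)
    slab = 3.0                            # slab thickness (cm)

    mean_matrix_chord = (4.0 * R / 3.0) * (1.0 - f) / f   # Markovian matrix chord length

    def transmission_one_history():
        x, tau = 0.0, 0.0                 # position along the beam, accumulated optical depth
        in_sphere = False
        while x < slab:
            if in_sphere:
                seg, sigma = 2.0 * R * np.sqrt(rng.random()), sig_sphere   # sphere chord
            else:
                seg, sigma = rng.exponential(mean_matrix_chord), sig_matrix
            seg = min(seg, slab - x)      # clip the last segment at the far face
            tau += sigma * seg
            x += seg
            in_sphere = not in_sphere
        return np.exp(-tau)

    n = 20000
    T = np.mean([transmission_one_history() for _ in range(n)])
    print(f"ensemble-averaged uncollided transmission ~ {T:.4f}")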
9. ITS Version 4.0: Electron/photon Monte Carlo transport codes
SciTech Connect
Halbleib, J.A.; Kensek, R.P.; Seltzer, S.M.
1995-07-01
The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities to the three-dimensional combinatorial-geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.
10. Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads
Souza, Fabrício M.; Carrara, Thiago L.; Vernek, E.
2011-09-01
We investigate real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is taken constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using nonequilibrium Green’s function technique we find that spin polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon transferred from the left to the right terminal, with spin dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect.
11. Transparent and Nonflammable Ionogel Photon Upconverters and Their Solute Transport Properties.
PubMed
Murakami, Yoichi; Himuro, Yuki; Ito, Toshiyuki; Morita, Ryoutarou; Niimi, Kazuki; Kiyoyanagi, Noriko
2016-02-01
Photon upconversion based on triplet-triplet annihilation (TTA-UC) is a technology to convert presently wasted sub-bandgap photons to usable higher-energy photons. In this paper, ionogel TTA-UC samples are first developed by gelatinizing ionic liquids containing triplet-sensitizing and light-emitting molecules using an ionic gelator, resulting in transparent and nonflammable ionogel photon upconverters. The photophysical properties of the ionogel samples are then investigated, and the results suggest that the effect of gelation on the diffusion of the solutes is negligibly small. To further examine this suggestion and acquire fundamental insight into the solute transport properties of the samples, the diffusion of charge-neutral solute species over much longer distances than microscopic interpolymer distances is measured by electrochemical potential-step chronoamperometry. The results reveal that the diffusion of solute species is not affected by gelation within the tested gelator concentration range, supporting our interpretation of the initial results of the photophysical investigations. Overall, our results show that the advantage of nonfluidity can be imparted to ionic-liquid-based photon upconverters without sacrificing molecular diffusion, optical transparency, and nonflammability. PMID:26752701
12. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A
Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael H.; Sobolevsky, Nikolai; Thomsen, Bjarne; Bassler, Niels
2015-03-01
The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi-Teller Z-law, which is implemented by default in SHIELD-HIT12A has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories which have been developed, and give a better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi-Teller Z-law, even if experimental data conclude that the Z-law is inadequately describing annihilation on compounds. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.
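The Fermi-Teller Z-law mentioned above amounts to one line of arithmetic: the probability that a stopped negative projectile is captured by a given atomic species of a compound is taken proportional to the number of atoms of that species times its atomic number. The snippet below evaluates it for water and for PMMA as examples; which materials matter for the AD-4/ACE comparison is not implied here.

    # Fermi-Teller Z-law capture probabilities for a compound:
    # P(element) proportional to (number of atoms) x (atomic number).
    def z_law(composition):
        # composition: {element: (atom count, atomic number Z)}
        weights = {el: n * Z for el, (n, Z) in composition.items()}
        total = sum(weights.values())
        return {el: w / total for el, w in weights.items()}

    water = {"H": (2, 1), "O": (1, 8)}
    pmma = {"C": (5, 6), "H": (8, 1), "O": (2, 8)}   # C5H8O2 monomer

    for name, comp in (("water", water), ("PMMA", pmma)):
        probs = z_law(comp)
        print(name, {el: round(p, 3) for el, p in probs.items()})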
13. Update on the Status of the FLUKA Monte Carlo Transport Code
NASA Technical Reports Server (NTRS)
Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.
2004-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow us to achieve better consistency between the nucleus-nucleus section and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, structural changes to the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import into ROOT of the FLUKA output files for analysis and to deploy a user-friendly GUI input interface.
14. Predicting the timing properties of phosphor-coated scintillators using Monte Carlo light transport simulation.
PubMed
Roncali, Emilie; Schmall, Jeffrey P; Viswanath, Varsha; Berg, Eric; Cherry, Simon R
2014-04-21
Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (Teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface. PMID:24694727
15. Predicting the timing properties of phosphor-coated scintillators using Monte Carlo light transport simulation
Roncali, Emilie; Schmall, Jeffrey P.; Viswanath, Varsha; Berg, Eric; Cherry, Simon R.
2014-04-01
Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (Teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface.
16. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method
SciTech Connect
Franke, B. C.; Prinja, A. K.
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
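For orientation, a generic (not paper-specific) form of the intrusive stochastic Galerkin reduction is the following: the result quantity is expanded in orthogonal polynomials of the random inputs, and Galerkin projection of the stochastic operator onto each basis function yields the coupled deterministic moment system referred to above. The symbols below are illustrative conventions, not notation from the paper.

```latex
\[
  \psi(x,\xi) \;\approx\; \sum_{k=0}^{P} \psi_k(x)\,\Phi_k(\xi),
  \qquad
  \langle \Phi_i \Phi_j \rangle = \delta_{ij}\,\langle \Phi_i^2 \rangle ,
\]
\[
  \sum_{k=0}^{P} \big\langle \Phi_j\, L(\xi)\, \Phi_k \big\rangle\, \psi_k(x)
  \;=\; \big\langle \Phi_j\, q \big\rangle ,
  \qquad j = 0,\dots,P .
\]
```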
17. Monte Carlo simulation of ion transport of the high strain ionomer with conducting powder electrodes
He, Xingxi; Leo, Donald J.
2007-04-01
The transport of charge due to an electric stimulus is the primary mechanism of actuation for a class of polymeric active materials known as ionomeric polymer transducers (IPT). At low frequency, the strain response is strongly related to charge accumulation at the electrodes. Experimental results demonstrated that using conducting powders, such as single-walled carbon nanotubes (SWNT), polyaniline (PANI) powders, high-surface-area RuO2, carbon black, etc., as electrodes increases the mechanical deformation of the IPT by increasing the capacitance of the material. In this paper, a Monte Carlo simulation of a two-dimensional ion hopping model has been built to describe ion transport in the IPT. The shape of the conducting powder is assumed to be a sphere. A step voltage is applied between the electrodes of the IPT, causing thermally-activated hopping between multiwell energy structures. The energy barrier height includes three parts: the height due to the external electric potential, the intrinsic energy, and the height due to ion interactions. The finite element method software ANSYS is employed to calculate the static electric potential distribution inside the material with the powder sphere in varied locations. The interaction between ions and the electrodes, including powder electrodes, is determined by using the method of images. At each simulation step, the energy of each cation is updated to compute the ion hopping rate, which directly relates to the probability of an ion moving to its neighboring site. The simulation ends when the current drops to a constant zero. Periodic boundary conditions are applied when ions hop in the direction perpendicular to the external electric field. When an ion is moved out of the simulation region, its corresponding periodic replica enters from the opposite side. In the direction of the external electric field, parallel programming is achieved in C augmented with functions that perform message-passing between processors using the Message Passing Interface (MPI) standard. The effects of conducting powder size, locations and amount are discussed by studying the stationary charge density plots and ion distribution plots.
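The hopping step described above can be sketched roughly as follows; the Arrhenius rate form, the attempt frequency, and the rate-proportional site selection are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of one thermally-activated hopping decision for a lattice
# ion: the total barrier combines the intrinsic barrier, the external-potential
# contribution, and the ion-ion/image interaction contribution.
import math
import random

K_B = 8.617e-5   # Boltzmann constant, eV/K
NU0 = 1.0e13     # attempt frequency, 1/s (assumed)

def hop_rate(barrier_intrinsic, d_potential, d_interaction, temperature):
    """Arrhenius rate for one candidate hop; all barrier terms in eV."""
    barrier = barrier_intrinsic + d_potential + d_interaction
    return NU0 * math.exp(-max(barrier, 0.0) / (K_B * temperature))

def choose_hop(rates):
    """Kinetic-Monte-Carlo-style selection: pick a candidate neighboring site
    with probability proportional to its hopping rate."""
    total = sum(rates.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for site, rate in rates.items():
        acc += rate
        if r <= acc:
            return site
    return site  # numerical edge case
```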
18. Unidirectional transport in electronic and photonic Weyl materials by Dirac mass engineering
Bi, Ren; Wang, Zhong
2015-12-01
Unidirectional transport has been observed in two-dimensional systems; however, so far it has not been experimentally observed in three-dimensional bulk materials. In this theoretical work, we show that the recently discovered Weyl materials provide a platform for unidirectional transport inside bulk materials. With high experimental feasibility, a complex Dirac mass can be generated and manipulated in photonic Weyl crystals, creating unidirectionally propagating modes observable in transmission experiments. A possible realization in (electronic) Weyl semimetals is also studied. We show in a lattice model that, with a short-range interaction, the desired form of the Dirac mass can be spontaneously generated in a first-order transition.
19. Photonics
Hiruma, Teruo
1993-04-01
After developing various kinds of photodetectors such as phototubes, photomultiplier tubes, image pick-up tubes, solid state photodetectors and a variety of light sources, we also started to develop integrated systems utilizing new detectors or imaging devices. These led us to the technology for single photon counting imaging and the detection of picosecond and femtosecond phenomena. Through those experiences, we gained the understanding that the photon is a paste of substances, and yet we know so little about the photon. By developing various technology for many fields such as analytical chemistry, high energy physics, medicine, biology, brain science, astronomy, etc., we are beginning to understand that the mind and life are based on the same matter, that is, substance. Since humankind has so little knowledge about the substance concerning the mind and life, this causes some confusion on these subjects at this moment. If we explore photonics more deeply, many problems we now have in the world could be solved. By creating new knowledge and technology, I believe we will be able to solve the problems of illness, aging, energy, environment, human capability, and finally, the essential healthiness of the six billion human beings in the world.
20. Monte Carlo Benchmark
Energy Science and Technology Software Center (ESTSC)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
1. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning
SciTech Connect
Chetty, Indrin J.; Curran, Bruce; Cygler, Joanna E.; DeMarco, John J.; Ezzell, Gary; Faddegon, Bruce A.; Kawrakow, Iwan; Keall, Paul J.; Liu, Helen; Ma, C.-M. Charlie; Rogers, D. W. O.; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V.
2007-12-15
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
2. Using FLUKA Monte Carlo transport code to develop parameterizations for fluence and energy deposition data for high-energy heavy charged particles
Brittingham, John; Townsend, Lawrence; Barzilla, Janet; Lee, Kerry
2012-03-01
Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm2. Heavy charged ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon were simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared to well-known data and to the calculations of other deterministic and Monte Carlo codes. Results will be presented.
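The lookup-table idea can be sketched as follows; the grids and table values below are placeholders rather than FLUKA output, and the quantity being tabulated is only an assumed example.

```python
# Sketch: tabulate a quantity (e.g., dose per incident particle) from Monte
# Carlo runs on a grid of areal density and beam energy, then interpolate the
# table instead of re-running the transport code.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

areal_density = np.linspace(0.0, 100.0, 21)      # g/cm^2
energy = np.logspace(-1, 1, 11)                  # 0.1 - 10 GeV/nucleon
dose_table = np.random.rand(21, 11)              # placeholder for MC results

dose_lookup = RegularGridInterpolator((areal_density, energy), dose_table)

# Query the parameterization at 35 g/cm^2 and 2.5 GeV/nucleon:
print(dose_lookup([[35.0, 2.5]]))
```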
3. A generalized framework for in-line energy deposition during steady-state Monte Carlo radiation transport
SciTech Connect
Griesheimer, D. P.; Stedry, M. H.
2013-07-01
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
4. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media
SciTech Connect
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
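For reference, the standard multilevel Monte Carlo telescoping identity underlying the method is shown below; few samples are needed on the fine (expensive) levels because the variance of the level differences decays with grid refinement. The notation is the usual MLMC convention, not symbols taken from the paper.

```latex
\[
  \mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\!\left[Q_\ell - Q_{\ell-1}\right],
  \qquad
  \widehat{Q}^{\mathrm{MLMC}} \;=\; \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{i=1}^{N_\ell}
  \left(Q_\ell^{(i)} - Q_{\ell-1}^{(i)}\right),
  \quad Q_{-1} \equiv 0 .
\]
```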
5. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting.
PubMed
Kim, Tae Woo; Ping, Yuan; Galli, Giulia A; Choi, Kyoung-Shin
2015-01-01
n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge. PMID:26498984
6. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting
PubMed Central
Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin
2015-01-01
n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (∼2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ∼0.2 eV but also increases the majority carrier density and mobility, enhancing electron–hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge. PMID:26498984
7. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting
Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin
2015-10-01
n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge.
8. Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads
Souza, Fabricio M.; Carrara, Thiago L.; Vernek, Edson
2012-02-01
Time-dependent transport in quantum dot systems (QDs) has received significant attention due to a variety of new quantum physical phenomena emerging on transient time scales.[1] In the present work [2] we investigate real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is taken constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon-transferred from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect. [1] F. M. Souza, Phys. Rev. B 76, 205315 (2007). [2] F. M. Souza, T. L. Carrara, and E. Vernek, Phys. Rev. B 84, 115322 (2011).
9. Monte-Carlo-derived insights into dose-kerma-collision kerma inter-relationships for 50 keV-25 MeV photon beams in water, aluminum and copper
Kumar, Sudhir; Deshpande, Deepak D.; Nahum, Alan E.
2015-01-01
The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte-Carlo (MC) simulation indispensable; we have used the MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6-15 MV) are presented. Beyond the depth of maximum dose, D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated materials. D/K in sub-equilibrium small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for $\overline{X}$, the distance upstream from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: $\overline{X}_{\mathrm{emp}} \approx 0.5\,R_{\mathrm{csda}}(\overline{E}_0)$, where $\overline{E}_0$ is the mean initial secondary electron energy. These $\overline{X}_{\mathrm{emp}}$ agree well with exact MC-derived values for photon energies from 5-25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV-25 MeV photons in the three materials, showing close agreement with the MC-derived values.
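As a rough worked example of the proposed expression (taking the standard tabulated CSDA range of about 0.44 cm for 1 MeV electrons in water, a value not quoted in the abstract):

```latex
\[
  \overline{X}_{\mathrm{emp}} \;\approx\; 0.5\, R_{\mathrm{csda}}\!\left(\overline{E}_0\right)
  \quad\Longrightarrow\quad
  \overline{X}_{\mathrm{emp}} \approx 0.5 \times 0.44\ \mathrm{cm} \approx 0.22\ \mathrm{cm}
  \quad\text{for } \overline{E}_0 = 1\ \mathrm{MeV} \text{ in water}.
\]
```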
10. NASA astronaut dosimetry: Implementation of scalable human phantoms and benchmark comparisons of deterministic versus Monte Carlo radiation transport
11. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code
SciTech Connect
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the simulation is running.
12. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe electric power sodium cooled reactor. Nuclear characteristics are calculated at beginning of cycle of an initial core and at beginning and end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
13. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study
SciTech Connect
2015-02-24
Depending on the location and depth of the tumor, electron or photon beams might be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor. On the other hand, photon beams are used for the treatment of deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth, and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the percent depth dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a fraction of the electrons passes through the holes, resulting in a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. This study showed the advantages of mixing the electron and photon beams: the reduction of the depth dependence of the pure electron penumbra, especially for small fields, and the reduction of the dramatic changes of the PDD curve with irradiation field size.
14. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study
2015-02-01
Depending on the location and depth of the tumor, electron or photon beams might be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor. On the other hand, photon beams are used for the treatment of deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth, and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the percent depth dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a fraction of the electrons passes through the holes, resulting in a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. This study showed the advantages of mixing the electron and photon beams: the reduction of the depth dependence of the pure electron penumbra, especially for small fields, and the reduction of the dramatic changes of the PDD curve with irradiation field size.
15. Use of single scatter electron monte carlo transport for medical radiation sciences
DOEpatents
Svatos, Michelle M. (Oakland, CA)
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
16. Thermal-to-fusion neutron convertor and Monte Carlo coupled simulation of deuteron/triton transport and secondary products generation
Wang, Guan-bo; Liu, Han-gang; Wang, Kan; Yang, Xin; Feng, Qi-jie
2012-09-01
A thermal-to-fusion neutron convertor has been studied at the China Academy of Engineering Physics (CAEP). Current Monte Carlo codes, such as MCNP and GEANT, are inadequate when applied to such multi-step reaction problems. A Monte Carlo tool, RSMC (Reaction Sequence Monte Carlo), has been developed to simulate this coupled problem, from neutron absorption, to charged particle ionization and secondary neutron generation. A "forced particle production" variance reduction technique has been implemented to improve the calculation speed distinctly by making the deuteron/triton-induced secondary products play a major role. Nuclear data are handled from ENDF or TENDL, and stopping powers from SRIM, which better describes low-energy deuteron/triton interactions. As a validation, an accelerator-driven mono-energetic 14 MeV fusion neutron source is employed, which has been studied in depth and includes deuteron transport and secondary neutron generation. Various parameters, including the fusion neutron angle distribution, the average neutron energy at different emission directions, and differential and integral energy distributions, are calculated with our tool and with a traditional deterministic method as reference. As a result, we present the calculation results for the convertor with RSMC, including the conversion ratio of 1 mm 6LiD under a typical thermal neutron (Maxwell spectrum) incidence, and the fusion neutron spectrum, which will be used for our experiment.
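"Forced particle production" is a standard variance-reduction pattern; a generic sketch of the weight bookkeeping (not RSMC source code, and with an invented interface) is:

```python
# Generic sketch: instead of letting a rare deuteron/triton-induced reaction
# occur with its natural (small) probability, always produce the secondary and
# carry the probability in the statistical weight so tallies stay unbiased.
import random

def natural_production(parent_weight, p_reaction):
    """Analog sampling: secondaries are rare, so tallies are noisy."""
    if random.random() < p_reaction:
        return [{"weight": parent_weight}]   # one secondary, full weight
    return []

def forced_production(parent_weight, p_reaction):
    """Variance reduction: a secondary is produced every time, with its weight
    scaled by the true reaction probability."""
    return [{"weight": parent_weight * p_reaction}]
```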
17. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling.
PubMed
Schaefer, C; Jansen, A P J
2013-02-01
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature. PMID:23406093
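A hedged sketch of a stoichiometry-based mass-balance correction in this spirit, using an invented A → B → C surface reaction network, is shown below; the corrected species rates are rebuilt from fitted reaction extents, so they satisfy the stoichiometric constraints by construction.

```python
# Illustrative mass-balance correction: project noisy kinetic Monte Carlo rate
# estimates onto the subspace consistent with the reaction stoichiometry via
# least squares. The network and numbers are invented for illustration.
import numpy as np

# Rows: species A, B, C; columns: reactions r1 (A -> B), r2 (B -> C).
S = np.array([[-1.0,  0.0],
              [ 1.0, -1.0],
              [ 0.0,  1.0]])

net_rates_kmc = np.array([-1.02, 0.07, 0.97])   # noisy kMC estimates (do not balance)

# Reaction extents x minimizing ||S x - net_rates_kmc||_2, then corrected
# species rates S x, which conserve mass exactly for this network.
x, *_ = np.linalg.lstsq(S, net_rates_kmc, rcond=None)
net_rates_corrected = S @ x
print(net_rates_corrected, net_rates_corrected.sum())
```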
18. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling
SciTech Connect
Schaefer, C.; Jansen, A. P. J.
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
19. High-power beam transport through a hollow-core photonic bandgap fiber.
PubMed
Jones, D C; Bennett, C R; Smith, M A; Scott, A M
2014-06-01
We investigate the use of a seven-cell hollow-core photonic bandgap fiber for transport of CW laser radiation from a single-mode, narrow-linewidth, high-power fiber laser amplifier. Over 90% of the amplifier output was coupled successfully and transmitted through the fiber in a near-Gaussian mode, with negligible backreflection into the source. 100 W of power was successfully transmitted continuously without damage and 160 W of power was transmitted briefly before the onset of thermal lensing in the coupling optics. PMID:24875992
20. Program EPICP: Electron photon interaction code, photon test module. Version 94.2
SciTech Connect
Cullen, D.E.
1994-09-01
The computer code EPICP performs Monte Carlo photon transport calculations in a simple one-zone cylindrical detector. Results include deposition within the detector, transmission, reflection and lateral leakage from the detector, as well as events and energy deposition as a function of the depth into the detector. EPICP is part of the EPIC (Electron Photon Interaction Code) system. EPICP is designed to perform both normal transport calculations and diagnostic calculations involving only photons, with the objective of developing optimum algorithms for later use in EPIC. The EPIC system includes other modules designed with the same objective; these include electron and positron transport (EPICE), neutron transport (EPICN), charged particle transport (EPICC), geometry (EPICG), and source sampling (EPICS). This is a modular system that, once optimized, can be linked together to consider a wide variety of particles, geometries, sources, etc. By design EPICP only considers photon transport. In particular, it does not consider electron transport, so that later EPICP and EPICE can be used to quantitatively evaluate the importance of electron transport when starting from photon sources. In this report I merely mention where we expect the results obtained considering only photon transport to differ significantly from those obtained using coupled electron-photon transport.
NASA Technical Reports Server (NTRS)
Wasilewski, A.; Krys, E.
1985-01-01
Results of Monte-Carlo simulations of electromagnetic cascade development in lead and lead-scintillator sandwiches are analyzed. It is demonstrated that the structure function for the core approximation is not applicable when the primary energy is higher than 100 GeV. The simulation data have shown that introducing an inhomogeneous chamber structure results in a subsequent reduction in the number of secondary particles.
2. Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered, the space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. The transport codes considered include HZETRN, UPROP, FLUKA, and GEANT4; the space radiation cases include solar particle event (SPE) and galactic cosmic ray (GCR) environments.
3. Monte Carlo study of coherent scattering effects of low-energy charged particle transport in Percus-Yevick liquids
Tattersall, W. J.; Cocks, D. G.; Boyle, G. J.; Buckman, S. J.; White, R. D.
2015-04-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002), 10.1016/S0009-2614(02)01177-6], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
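The role of the static structure factor in such a simulation can be illustrated with a simple rejection-sampling sketch; the S(q) below is a toy function standing in for the Percus-Yevick result, and the isotropic gas-phase proposal is an assumption made only for brevity.

```python
# Illustrative sketch: rejection-sample a polar scattering angle so that an
# isotropic gas-phase differential cross section is reweighted by a static
# structure factor S(q), with q = 2 k sin(theta/2).
import math
import random

def toy_structure_factor(q):
    """Placeholder S(q) with one broad peak; a Percus-Yevick S(q) would go here."""
    return 1.0 + 0.8 * math.exp(-((q - 2.0) ** 2) / 0.5)

def sample_coherent_angle(k, s_max=1.8):
    """Rejection sampling of theta in [0, pi]; s_max bounds S(q) from above."""
    while True:
        theta = math.acos(1.0 - 2.0 * random.random())   # isotropic proposal
        q = 2.0 * k * math.sin(theta / 2.0)
        if random.random() * s_max <= toy_structure_factor(q):
            return theta
```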
4. Monte Carlo study of coherent scattering effects of low-energy charged particle transport in Percus-Yevick liquids.
PubMed
Tattersall, W J; Cocks, D G; Boyle, G J; Buckman, S J; White, R D
2015-04-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems. PMID:25974609
5. Modeling Positron Transport in Gaseous and Soft-condensed Systems with Kinetic Theory and Monte Carlo
Boyle, G.; Tattersall, W.; Robson, R. E.; White, Ron; Dujko, S.; Petrovic, Z. Lj.; Brunger, M. J.; Sullivan, J. P.; Buckman, S. J.; Garcia, G.
2013-09-01
An accurate quantitative understanding of the behavior of positrons in gaseous and soft-condensed systems is important for many technological applications as well as to fundamental physics research. Optimizing Positron Emission Tomography (PET) technology and understanding the associated radiation damage requires knowledge of how positrons interact with matter prior to annihilation. Modeling techniques developed for electrons can also be employed to model positrons, and these techniques can also be extended to account for the structural properties of the medium. Two complementary approaches have been implemented in the present work: kinetic theory and Monte Carlo simulations. Kinetic theory is based on the multi-term Boltzmann equation, which has recently been modified to include the positron-specific interaction processes of annihilation and positronium formation. Simultaneously, a Monte Carlo simulation code has been developed that can likewise incorporate positron-specific processes. Funding support from ARC (CoE and DP schemes).
6. Thermal Scattering Law Data: Implementation and Testing Using the Monte Carlo Neutron Transport Codes COG, MCNP and TART
SciTech Connect
Cullen, D E; Hansen, L F; Lent, E M; Plechaty, E F
2003-05-17
Recently we implemented the ENDF/B-VI thermal scattering law data in our neutron transport codes COG and TART. Our objective was to convert the existing ENDF/B data into double differential form in the Livermore ENDL format. This will allow us to use the ENDF/B data in any neutron transport code, be it a Monte Carlo, or deterministic code. This was approached as a multi-step project. The first step was to develop methods to directly use the thermal scattering law data in our Monte Carlo codes. The next step was to convert the data to double-differential form. The last step was to verify that the results obtained using the data directly are essentially the same as the results obtained using the double differential data. Part of the planned verification was intended to insure that the data as finally implemented in the COG and TART codes, gave the same answer as the well known MCNP code, which includes thermal scattering law data. Limitations in the treatment of thermal scattering law data in MCNP have been uncovered that prevented us from performing this part of our verification.
7. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors
Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy
2015-07-01
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
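A minimal sketch of a Metropolis-like acceptance step with a gate-bias term follows; the energy bookkeeping and the way the source-gate voltage enters are assumptions for illustration, not the published Hamiltonian.

```python
# Hedged sketch of a Metropolis-like acceptance decision for a proposed carrier
# move: downhill moves are always accepted, uphill moves with Boltzmann
# probability, and a gate term shifts the effective energy difference.
import math
import random

K_B_T = 0.025  # eV, roughly room temperature

def accept_move(delta_site_energy, gate_bias_term):
    delta_e = delta_site_energy + gate_bias_term
    if delta_e <= 0.0:
        return True
    return random.random() < math.exp(-delta_e / K_B_T)
```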
8. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S.; Harrendorf, Marco A.; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-01
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
9. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors.
PubMed
Bauer, Thilo; Jäger, Christof M; Jordan, Meredith J T; Clark, Timothy
2015-07-28
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves. PMID:26233114
10. A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport
SciTech Connect
Gentile, N A; Trahan, T J
2011-03-22
We describe a modification of the treatment of photon sources in the IMC algorithm. We describe this modified algorithm in the context of thermal emission in an infinite medium test problem at equilibrium and show that it completely eliminates statistical noise.
11. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and eliminates using memory bandwidth to check if a neutron in the batch has been terminated or not. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and a third contains pointers to energy distributions. This linked-list type of layout increases memory usage, but lowers the number of data loads that are needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation.
OptiX is a highly-optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry, so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm. WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e. determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
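The remapping-vector idea can be illustrated on the CPU as follows; the GPU radix sort, memory layout, and removal of finished histories are only mimicked here, and the reaction labels are invented.

```python
# CPU-side illustration of the remapping idea: instead of moving particle data,
# keep a vector of (reaction_type, index) pairs, sort it by reaction type each
# iteration, and let worker threads walk the sorted vector so equal reaction
# types stay contiguous. Completed histories (None) drop out of the cycle.
def build_remap_vector(reaction_types):
    """reaction_types[i] is the sampled reaction for particle i."""
    pairs = [(rt, i) for i, rt in enumerate(reaction_types) if rt is not None]
    pairs.sort(key=lambda p: p[0])          # stand-in for the parallel radix sort
    return [i for _, i in pairs]            # particle indices grouped by reaction

particles = ["elastic", "capture", None, "fission", "elastic"]
print(build_remap_vector(particles))        # -> [1, 0, 4, 3]
```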
12. Monte Carlo evaluation of the effect of inhomogeneities on dose calculation for low energy photons intra-operative radiation therapy in pelvic area.
PubMed
Chiavassa, Sophie; Buge, François; Hervé, Chloé; Delpon, Gregory; Rigaud, Jérôme; Lisbona, Albert; Supiot, Stéphane
2015-12-01
The aim of this study was to evaluate the effect of inhomogeneities on dose calculation for low energy photons intra-operative radiation therapy (IORT) in pelvic area. A GATE Monte Carlo model of the INTRABEAM was adapted for the study. Simulations were performed in the CT scan of a cadaver considering a homogeneous segmentation (water) and an inhomogeneous segmentation (5 tissues from ICRU44). Measurements were performed in the cadaver using EBT3 Gafchromic films. Impact of inhomogeneities on dose calculation in cadaver was 6% for soft tissues and greater than 300% for bone tissues. EBT3 measurements showed a better agreement with calculation for inhomogeneous media. However, dose discrepancy in soft tissues led to a sub-millimeter (0.65 mm) shift in the effective point dose in depth. Except for bone tissues, the effect of inhomogeneities on dose calculation for low energy photons intra-operative radiation therapy in pelvic area was not significant for the studied anatomy. PMID:26420445
13. Tests of the Monte Carlo simulation of the photon-tagger focal-plane electronics at the MAX IV Laboratory
Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.
2014-04-01
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
14. Monte Carlo simulation of the IRSN CANEL/T400 realistic mixed neutron-photon radiation field.
PubMed
Lacoste, V; Gressier, V
2004-01-01
The calibration of dosemeters and spectrometers in realistic neutron fields simulating those encountered at workplaces is essential to provide true and reliable dosimetric information to exposed nuclear workers. The CANEL assembly was set up at IRSN to produce such neutron fields. It comprises a depleted uranium shell, to produce fission neutrons, then iron and water to moderate them, and a polyethylene duct. The CANEL facility presented here is used with 3.3 MeV neutrons. Calculations were performed with the MCNP4C code to characterise this mixed neutron-photon expanded radiation field at the position where calibrations are usually performed. The neutron fluence energy and direction distributions were calculated, and the operational quantities were derived from these distributions. The photon fluence and the corresponding ambient dose equivalent were also estimated. Comparison with experimental results showed an overall good agreement. PMID:15353634
15. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters
SciTech Connect
Poludniowski, Gavin G.; Evans, Philip M.
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of the modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical-optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 µm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of the mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 µm) and grain radii (a > 0.5 µm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
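The strong size and wavelength sensitivity reported above can be appreciated with a much cruder model than either Mie theory or the paper's GODM: van de Hulst's anomalous-diffraction approximation for a non-absorbing sphere. The sketch below is purely illustrative; the relative refractive index of 1.4 and the 0.545 µm emission wavelength are assumptions chosen only to produce plausible numbers for a Gd2O2S:Tb-like grain.

    import numpy as np

    def q_ext_adt(radius_um, wavelength_um, m_rel):
        """van de Hulst anomalous-diffraction extinction efficiency for a
        non-absorbing sphere with relative refractive index m_rel."""
        x = 2.0 * np.pi * radius_um / wavelength_um    # size parameter
        p = 2.0 * x * (m_rel - 1.0)                    # phase-shift parameter
        return 2.0 - (4.0 / p) * np.sin(p) + (4.0 / p**2) * (1.0 - np.cos(p))

    # Illustrative grain radii (um) at an assumed 0.545 um emission line.
    for a in (0.5, 1.0, 2.0, 5.0):
        print(a, q_ext_adt(a, 0.545, 1.4))

Because the efficiency oscillates with the phase-shift parameter, averaging over a realistic grain-size distribution and emission spectrum washes out the fine structure, which is the paper's argument for using the smoother geometrical-optics models.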
16. SU-E-J-09: A Monte Carlo Analysis of the Relationship Between Cherenkov Light Emission and Dose for Electrons, Protons, and X-Ray Photons
SciTech Connect
Glaser, A; Zhang, R; Gladstone, D; Pogue, B
2014-06-01
Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time by using a theoretical and Monte Carlo based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams was explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of beam hardening in a single beam. The technique was found to be not suitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.
17. Dosimetry of interface region near closed air cavities for Co-60, 6 MV and 15 MV photon beams using Monte Carlo simulations
PubMed Central
Joshi, Chandra P.; Darko, Johnson; Vidyasagar, P. B.; Schreiner, L. John
2010-01-01
18. Dosimetry of interface region near closed air cavities for Co-60, 6 MV and 15 MV photon beams using Monte Carlo simulations.
PubMed
Joshi, Chandra P; Darko, Johnson; Vidyasagar, P B; Schreiner, L John
2010-04-01
19. Influence of photon energy spectra from brachytherapy sources on Monte Carlo simulations of kerma and dose rates in water and air
SciTech Connect
Rivard, Mark J.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo
2010-02-15
Purpose: For a given radionuclide, there are several photon spectrum choices available to dosimetry investigators for simulating the radiation emissions from brachytherapy sources. This study examines the dosimetric influence of selecting the spectra for ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd on the final estimations of kerma and dose. Methods: For ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
20. Decoupling initial electron beam parameters for Monte Carlo photon beam modelling by removing beam-modifying filters from the beam path
De Smedt, B.; Reynaert, N.; Flachet, F.; Coghe, M.; Thompson, M. G.; Paelinck, L.; Pittomvils, G.; De Wagter, C.; De Neve, W.; Thierens, H.
2005-12-01
A new method is presented to decouple the parameters of the incident electron beam hitting the target of the linear accelerator. It consists essentially in optimizing the agreement between measurements and calculations when the difference filter (an additional filter inserted in the linac head to obtain uniform lateral dose-profile curves for the high energy photon beam) and the flattening filter are both removed from the beam path. This leads to lateral dose-profile curves that depend only on the mean energy of the incident electron beam, since the effect of the radial intensity distribution of the incident electron beam is negligible when both filters are absent. The location of the primary collimator and the thickness and density of the target are not considered as adjustable parameters, since a satisfactory working Monte Carlo model is obtained for the low energy photon beam (6 MV) of the linac using the same target and primary collimator. This method was applied to conclude that the mean energy of the incident electron beam for the high energy photon beam (18 MV) of our Elekta SLi Plus linac is equal to 14.9 MeV. After optimizing the mean energy, the modelling of the filters, in accordance with the information provided by the manufacturer, can be verified by positioning only one filter in the linac head while the other is removed. It is also demonstrated that the parameter setting for bremsstrahlung angular sampling in BEAMnrc ('Simple', using the leading term of the Koch and Motz equation, or 'KM', using the full equation) leads to different dose-profile curves for the same incident electron energy for the studied 18 MV beam. It is therefore important to perform the calculations in 'KM' mode. Note that the filters are not physically removed from the linac head; they remain present and are only rotated out of the beam. This makes the described method applicable in practice, since no recommissioning process is required.
1. Consistent treatment of transport properties for five-species air direct simulation Monte Carlo/Navier-Stokes applications
Stephani, K. A.; Goldstein, D. B.; Varghese, P. L.
2012-07-01
A general approach for achieving consistency in the transport properties between direct simulation Monte Carlo (DSMC) and Navier-Stokes (CFD) solvers is presented for five-species air. Coefficients of species diffusion, viscosity, and thermal conductivity are considered. The transport coefficients that are modeled in CFD solvers are often obtained from expressions involving sets of collision integrals, which are computed from more realistic intermolecular potentials (i.e., ab initio calculations). In this work, the self-consistent effective binary diffusion and Gupta et al.-Yos transport models are considered. The DSMC transport coefficients are approximated from Chapman-Enskog theory, in which the collision integrals are computed using either the variable hard sphere (VHS) or the variable soft sphere (VSS) phenomenological collision cross section model. The VHS and VSS parameters are then adjusted so that the DSMC transport coefficients achieve a best fit to the coefficients computed from the more realistic intermolecular potentials over a range of temperatures. The best-fit collision model parameters are determined for both collision-averaged and collision-specific pairing approaches using the Nelder-Mead simplex algorithm. A consistent treatment of the diffusion, viscosity, and thermal conductivities is presented, and recommended sets of best-fit VHS and VSS collision model parameters are provided for a five-species air mixture.
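The fitting step can be sketched in a few lines. For the VHS model, Chapman-Enskog theory gives a power-law viscosity mu(T) = mu0 (T/T0)^omega, and the exponent omega is exactly the kind of parameter adjusted in the study. The script below is not the authors' code: the "reference" viscosity curve is synthetic and stands in for coefficients computed from realistic potentials, and only the Nelder-Mead fitting mechanics are meant to match the description above.

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic reference viscosity over 500-10000 K (placeholder for values
    # derived from ab initio collision integrals).
    T = np.linspace(500.0, 10000.0, 40)
    mu_reference = 1.656e-5 * (T / 273.0) ** 0.74     # Pa*s, made-up stand-in

    T0 = 273.0
    def vhs_viscosity(params, T):
        mu0, omega = params
        return mu0 * (T / T0) ** omega

    def cost(params):
        return np.sum((vhs_viscosity(params, T) - mu_reference) ** 2)

    fit = minimize(cost, x0=[1.7e-5, 0.7], method="Nelder-Mead")
    mu0_fit, omega_fit = fit.x
    print(mu0_fit, omega_fit)   # omega_fit is the best-fit VHS temperature exponent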
2. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory
SciTech Connect
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2010-01-01
3. A Monte Carlo study of electron-hole scattering and steady-state minority-electron transport in GaAs
Sadra, K.; Maziar, C. M.; Streetman, B. G.; Tang, D. S.
1988-11-01
We report the first bipolar Monte Carlo calculations of steady-state minority-electron transport in room-temperature p-GaAs including multiband electron-hole scattering with and without hole overlap factors. Our results show how such processes, which make a significant contribution to the minority-electron energy loss rate, can affect steady-state minority-electron transport. Furthermore, we discuss several other issues which we believe should be investigated before present Monte Carlo treatments of electron-hole scattering can provide quantitative information.
4. From force-fields to photons: MD simulations of dye-labeled nucleic acids and Monte Carlo modeling of FRET
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3 carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
5. SIMULATION OF ION CONDUCTION IN α-HEMOLYSIN NANOPORES WITH COVALENTLY ATTACHED β-CYCLODEXTRIN BASED ON BOLTZMANN TRANSPORT MONTE CARLO MODEL
PubMed Central
Toghraee, Reza; Lee, Kyu-Il; Papke, David; Chiu, See-Wing; Jakobsson, Eric; Ravaioli, Umberto
2009-01-01
Ion channels, as nature's solution to regulating biological environments, are particularly interesting to device engineers seeking to understand how natural molecular systems realize device-like functions, such as stochastic sensing of organic analytes. What's more, attaching molecular adaptors in desired orientations inside genetically engineered ion channels enhances the system's functionality as a biosensor. In general, a hierarchy of simulation methodologies is needed to study different aspects of a biological system like ion channels. Biology Monte Carlo (BioMOCA), a three-dimensional coarse-grained particle ion channel simulator, offers a powerful and general approach to study ion channel permeation. BioMOCA is based on the Boltzmann Transport Monte Carlo (BTMC) and Particle-Particle-Particle-Mesh (P3M) methodologies developed at the University of Illinois at Urbana-Champaign. In this paper, we have employed BioMOCA to study two engineered mutations of α-HL, namely (M113F)6(M113C-D8RL2)1-β-CD and (M113N)6(T117C-D8RL3)1-β-CD. The channel conductance calculated by BioMOCA is slightly higher than experimental values. The permanent charge distributions and the geometrical shape of the channels give rise to selectivity towards anions and also to an asymmetry in the I-V curves, promoting a rectification largely for cations. PMID:20938493
6. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of dose, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
7. Ionization chamber dosimetry of small photon fields: a Monte Carlo study on stopping-power ratios for radiosurgery and IMRT beams
Sánchez-Doblado, F.; Andreo, P.; Capote, R.; Leal, A.; Perucha, M.; Arráns, R.; Núñez, L.; Mainegra, E.; Lagares, J. I.; Carrasco, E.
2003-07-01
Absolute dosimetry with ionization chambers of the narrow photon fields used in stereotactic techniques and IMRT beamlets is constrained by lack of electron equilibrium in the radiation field. It is questionable that stopping-power ratios in dosimetry protocols, obtained for broad photon beams and quasi-electron equilibrium conditions, can be used in the dosimetry of narrow fields while keeping the uncertainty at the same level as for the broad beams used in accelerator calibrations. Monte Carlo simulations have been performed for two 6 MV clinical accelerators (Elekta SL-18 and Siemens Mevatron Primus), equipped with radiosurgery applicators and MLC. Narrow circular and Z-shaped on-axis and off-axis fields, as well as broad IMRT configured beams, have been simulated together with reference 10 × 10 cm2 beams. Phase-space data have been used to generate 3D dose distributions which have been compared satisfactorily with experimental profiles (ion chamber, diodes and film). Photon and electron spectra at various depths in water have been calculated, followed by Spencer-Attix (Δ = 10 keV) stopping-power ratio calculations which have been compared to those used in the IAEA TRS-398 code of practice. For water/air and PMMA/air stopping-power ratios, agreement within 0.1% has been obtained for the 10 × 10 cm2 fields. For radiosurgery applicators and narrow MLC beams, the calculated s_w,air values agree with the reference within +/-0.3%, well within the estimated standard uncertainty of the reference stopping-power ratios (0.5%). Ionization chamber dosimetry of narrow beams at the photon qualities used in this work (6 MV) can therefore be based on stopping-power ratio data in dosimetry protocols. For a modulated 6 MV broad beam used in clinical IMRT, s_w,air agrees within 0.1% with the value for 10 × 10 cm2, confirming that at low energies IMRT absolute dosimetry can also be based on data for open reference fields. At higher energies (24 MV) the difference in s_w,air was up to 1.1%, indicating that the use of protocol data for narrow beams in such cases is less accurate than at low energies, and detailed calculations of the dosimetry parameters involved should be performed if similar accuracy to that of 6 MV is sought.
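For readers unfamiliar with the quantity being compared, the Spencer-Attix water-to-air stopping-power ratio has the standard cavity-theory form (written here from the general definition, not copied from the paper), where Φ_E is the electron fluence spectrum in water, Δ is the low-energy cut-off, and the last terms are the usual track-end contributions:

    s_{w,\mathrm{air}} =
    \frac{\int_{\Delta}^{E_{\max}} \Phi_E \left(\frac{L_{\Delta}}{\rho}\right)_{w} \mathrm{d}E
          + \Phi_E(\Delta)\left(\frac{S(\Delta)}{\rho}\right)_{w}\Delta}
         {\int_{\Delta}^{E_{\max}} \Phi_E \left(\frac{L_{\Delta}}{\rho}\right)_{\mathrm{air}} \mathrm{d}E
          + \Phi_E(\Delta)\left(\frac{S(\Delta)}{\rho}\right)_{\mathrm{air}}\Delta}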
8. Mathematical simulations of photon interactions using Monte Carlo analysis to evaluate the uncertainty associated with in vivo K X-ray fluorescence measurements of stable lead in bone
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and the shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered the uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing the overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of the lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
9. Study of the response of a lithium yttrium borate scintillator based neutron rem counter by Monte Carlo radiation transport simulations
Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.
2015-12-01
The scarcity and high cost of ³He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed at BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use as a rem counter.
10. Enhancements to the Combinatorial Geometry Particle Tracker in the Mercury Monte Carlo Transport Code: Embedded Meshes and Domain Decomposition
SciTech Connect
Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
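The hand-off of particles at interprocessor boundaries can be sketched with mpi4py and a toy 1-D decomposition. Nothing below is Mercury's actual interface; it only illustrates the pattern of tracking locally, buffering particles that cross the domain boundary, and exchanging them with the neighbouring rank (run with e.g. mpiexec -n 4).

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Toy 1-D spatial decomposition: rank r owns x in [r, r + 1).
    rng = np.random.default_rng(rank)
    particles = rank + rng.random(1000)          # local particle x-positions

    # "Track" particles for one step; some drift across the right-hand boundary.
    particles = particles + np.abs(rng.normal(0.0, 0.2, particles.size))
    leaving = particles[particles >= rank + 1.0]
    staying = particles[particles < rank + 1.0]

    # Communicate rightward leavers to the right neighbour and receive from the
    # left (periodic wrap for simplicity; only the right boundary is handled here).
    right = (rank + 1) % size
    left = (rank - 1) % size
    incoming = comm.sendrecv(leaving, dest=right, source=left)
    particles = np.concatenate([staying, incoming])
    print(rank, particles.size)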
11. An improved empirical approach to introduce quantization effects in the transport direction in multi-subband Monte Carlo simulations
Palestri, P.; Lucci, L.; Dei Tos, S.; Esseni, D.; Selmi, L.
2010-05-01
In this paper we propose and validate a simple approach to empirically account for quantum effects in the transport direction of MOS transistors (i.e. source and drain tunneling and the delocalized nature of the carrier wavepacket) in multi-subband Monte Carlo simulators, which already account for quantization in the direction normal to the semiconductor-oxide interface by solving the 1D Schrödinger equation in each section of the device. The model has been validated and calibrated against ballistic non-equilibrium Green's function simulations over a wide range of gate lengths, voltage biases and temperatures. The proposed model has just one adjustable parameter, and our results show that it can achieve good agreement with the NEGF approach.
12. Galerkin-based meshless methods for photon transport in the biological tissue.
PubMed
Qin, Chenghu; Tian, Jie; Yang, Xin; Liu, Kai; Yan, Guorui; Feng, Jinchao; Lv, Yujie; Xu, Min
2008-12-01
As an important small animal imaging technique, optical imaging has attracted increasing attention in recent years. However, the photon propagation process is extremely complicated owing to the highly scattering properties of biological tissue. Furthermore, the light transport simulation in tissue has a significant influence on inverse source reconstruction. In this contribution, we present two Galerkin-based meshless methods (GBMM) to determine the light exitance on the surface of the diffusive tissue. The two methods are both based on moving least squares (MLS) approximation, which requires only a series of nodes in the region of interest, so the complicated meshing task of the finite element method (FEM) can be avoided. Moreover, in one of the methods the MLS shape functions are further modified to satisfy the delta function property, which simplifies the treatment of boundary conditions in comparison with the other. Finally, the performance of the proposed methods is demonstrated with numerical and physical phantom experiments. PMID:19065170
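The building block of such meshless methods, the moving-least-squares fit over scattered nodes, can be shown in one dimension. The sketch below is only an illustration of plain MLS approximation (linear basis, compact weight); it is not the GBMM solver itself, which couples shape functions like these to the diffusion model of light transport.

    import numpy as np

    def mls_approximate(x_nodes, u_nodes, x_eval, support=0.3):
        """1-D MLS fit with a linear basis p(x) = [1, x] and a compact weight."""
        u_h = np.zeros_like(x_eval)
        for k, x in enumerate(x_eval):
            r = np.abs(x - x_nodes) / support
            w = np.where(r < 1.0, 1.0 - 3.0 * r**2 + 2.0 * r**3, 0.0)  # weight
            P = np.column_stack([np.ones_like(x_nodes), x_nodes])      # basis at nodes
            A = (P * w[:, None]).T @ P                                  # moment matrix
            b = (P * w[:, None]).T @ u_nodes
            coeff = np.linalg.solve(A, b)
            u_h[k] = coeff[0] + coeff[1] * x                            # p(x)^T a(x)
        return u_h

    x_nodes = np.linspace(0.0, 1.0, 21)            # scattered nodes, no mesh needed
    u_nodes = np.sin(2 * np.pi * x_nodes)
    x_eval = np.linspace(0.05, 0.95, 50)
    err = mls_approximate(x_nodes, u_nodes, x_eval) - np.sin(2 * np.pi * x_eval)
    print(np.max(np.abs(err)))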
13. Correlated Cooper pair transport and microwave photon emission in the dynamical Coulomb blockade
Leppäkangas, Juha; Fogelström, Mikael; Marthaler, Michael; Johansson, Göran
2016-01-01
We study theoretically electromagnetic radiation emitted by inelastic Cooper-pair tunneling. We consider a dc-voltage-biased superconducting transmission line terminated by a Josephson junction. We show that the generated continuous-mode electromagnetic field can be expressed as a function of the time-dependent current across the Josephson junction. The leading-order expansion in the tunneling coupling, similar to the P(E) theory, has previously been used to investigate the photon emission statistics in the limit of sequential (independent) Cooper-pair tunneling. By explicitly evaluating the system characteristics up to the fourth order in the tunneling coupling, we account for dynamics between consecutively tunneling Cooper pairs. Within this approach we investigate how temporal correlations in the charge transport can be seen in the first- and second-order coherences of the emitted microwave radiation.
14. Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh
SciTech Connect
Drumm, C.R.; Lorenz, J.
1999-03-02
A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers.
15. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering
Liu, Yong-Chun; Xiao, Yun-Feng; Li, Bei-Bei; Jiang, Xue-Feng; Li, Yan; Gong, Qihuang
2011-07-01
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
16. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering
SciTech Connect
Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang
2011-07-15
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
17. Radial quasiballistic transport in time-domain thermoreflectance studied using Monte Carlo simulations
SciTech Connect
Ding, D.; Chen, X.; Minnich, A. J.
2014-04-07
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
18. Radial quasiballistic transport in time-domain thermoreflectance studied using Monte Carlo simulations
Ding, D.; Chen, X.; Minnich, A. J.
2014-04-01
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
19. Neutron secondary-particle production cross sections and their incorporation into Monte-Carlo transport codes
SciTech Connect
Brenner, D.J.; Prael, R.E.; Little, R.C.
1987-01-01
Realistic simulations of the passage of fast neutrons through tissue require a large quantity of cross-section data. What are needed are differential (in particle type, energy and angle) cross sections. A computer code is described which produces such spectra for neutrons above approximately 14 MeV incident on light nuclei such as carbon and oxygen. Comparisons have been made with experimental measurements of double-differential secondary charged-particle production on carbon and oxygen at energies from 27 to 60 MeV; they indicate that the model is adequate in this energy range. In order to utilize fully the results of these calculations, they should be incorporated into a neutron transport code. This requires defining a generalized format for describing charged-particle production, putting the calculated results into this format, interfacing the neutron transport code with these data, and transporting the charged particles. The design and development of such a program is described. 13 refs., 3 figs.
20. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam
2015-09-01
Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method for predicting dose distributions in radiotherapy. The MC code system (especially DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, dose distribution calculations were made with three different voxel sizes: 1 × 1 × 0.1 cm3, 1 × 1 × 0.5 cm3, and 1 × 1 × 0.8 cm3. A total of 1 × 10⁹ histories was simulated in order to reach statistical uncertainties of 2%. This simulation takes about 9-10 hours to complete. Calculations were made for a 10 × 10 cm2 field of the 6 MV photon beam with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm, and MC-simulated and measured dose distributions were compared in a water phantom. The outputs of the simulations, i.e. the percent depth dose and the dose profile at dmax from the three sets of calculations, are presented and compared with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) over 0-5 cm depth. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of that voxel. The results of this study show that the difference between Monte Carlo simulation and experimental data depends on the voxel size, both for the percent depth dose (PDD) and for the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained for the 1 × 1 × 0.8 cm3 voxel size. The dose-profile analysis focused on the high-gradient dose region; for the profile scan along the Y axis, the largest difference, about 12%, was found for the 1 × 1 × 0.1 cm3 voxel size. This study demonstrates that the choice of voxel size in Monte Carlo simulation is important.
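The volume-averaging effect the study points to can be reproduced with a toy depth-dose curve. The numbers below are purely illustrative (they are not the Clinac iX data): averaging an analytic curve over thicker scoring voxels flattens the high-gradient build-up region, which is where the largest deviations appear.

    import numpy as np

    def pdd(z_cm):
        """Toy percent-depth-dose: fast build-up, then slow exponential fall-off."""
        z = np.maximum(z_cm, 0.0)
        return (1.0 - np.exp(-4.0 * z)) * np.exp(-0.05 * z)

    def voxel_averaged_pdd(z_centres, dz):
        """Average the analytic curve over a voxel of thickness dz at each depth."""
        fine = np.linspace(-dz / 2, dz / 2, 51)
        return np.array([pdd(zc + fine).mean() for zc in z_centres])

    z = np.arange(0.05, 5.0, 0.05)
    for dz in (0.1, 0.5, 0.8):                     # the three voxel thicknesses studied
        dev = np.abs(voxel_averaged_pdd(z, dz) - pdd(z))
        print(f"dz = {dz} cm: max deviation ~ {100 * dev.max() / pdd(z).max():.1f}% of D_max")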
1. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca
2007-03-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
2. SU-E-CAMPUS-I-02: Estimation of the Dosimetric Error Caused by the Voxelization of Hybrid Computational Phantoms Using Triangle Mesh-Based Monte Carlo Transport
SciTech Connect
2014-06-15
Purpose: A computational voxel phantom provides realistic anatomy, but the voxel structure may result in dosimetric error compared to the real anatomy composed of perfect surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing the voxel-based doses at different resolutions with triangle-mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, that supports triangle meshes. We calculated the energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as the voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The numbers of photon histories processed per second in the voxel geometries were 6.4×10⁴, 3.3×10⁴, and 1.3×10⁴ for the 10, 5, and 1 mm resolutions at 10 MeV, respectively, while the meshes ran at 4.0×10⁴ histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin layer tissues (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.
3. Monte-Carlo Simulation of Bacterial Transport in a Heterogeneous Aquifer With Correlated Hydrologic and Reactive Properties
Scheibe, T. D.
2003-12-01
It has been widely observed in field experiments that the apparent rate of bacterial attachment, particularly as parameterized by the collision efficiency in filtration-based models, decreases with transport distance (i.e., exhibits scale-dependency). This effect has previously been attributed to microbial heterogeneity; that is, variability in cell-surface properties within a single monoclonal population. We demonstrate that this effect could also be interpreted as a field-scale manifestation of local-scale correlation between physical heterogeneity (hydraulic conductivity variability) and reaction heterogeneity (attachment rate coefficient variability). A field-scale model of bacterial transport developed for the South Oyster field research site located near Oyster, Virginia, and observations from field experiments performed at that site, are used as the basis for this study. Three-dimensional Monte Carlo simulations of bacterial transport were performed under four alternative scenarios: 1) homogeneous hydraulic conductivity (K) and attachment rate coefficient (Kf), 2) heterogeneous K, homogeneous Kf, 3) heterogeneous K and Kf with local correlation based on empirical and theoretical relationships, and 4) heterogeneous K and Kf without local correlation. The results of the 3D simulations were analyzed using 1D model approximations following conventional methods of field data analysis. An apparent decrease with transport distance of effective collision efficiency was observed only in the case where the local properties were both heterogeneous and correlated. This effect was observed despite the fact that the local collision efficiency was specified as a constant in the 3D model, and can therefore be interpreted as a scale effect associated with the local correlated heterogeneity as manifested at the field scale.
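The essential ingredient of scenario 3, local correlation between hydraulic conductivity and attachment rate, can be sketched with a toy 1-D field. The construction below is only illustrative (the study itself used 3-D geostatistical simulation of the Oyster site, and the correlation sign and strength here are assumptions chosen for the example).

    import numpy as np

    rng = np.random.default_rng(7)
    n, corr_len = 512, 20

    # Correlated standard-normal field for ln K via moving-average smoothing of noise.
    white = rng.standard_normal(n + corr_len)
    kernel = np.ones(corr_len) / np.sqrt(corr_len)
    z = np.convolve(white, kernel, mode="valid")[:n]

    ln_K = -10.0 + 1.0 * z                       # heterogeneous ln(hydraulic conductivity)

    # Scenario 3: attachment-rate coefficient locally correlated with ln K
    # (negative correlation assumed for illustration), plus uncorrelated noise.
    rho = -0.8
    ln_Kf = -3.0 + rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(n)

    print(np.corrcoef(ln_K, ln_Kf)[0, 1])        # close to rho by construction

In a field-scale transport calculation over such correlated fields, fast-flowing (high-K) pathways carry most of the bacteria and, by construction, attach them more weakly, which is the mechanism the abstract invokes to explain an apparent decrease of collision efficiency with distance.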
4. The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)
SciTech Connect
1997-10-01
TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
5. Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
PubMed
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A number of equipment models are implemented in the program. Various voxel model file formats are supported. Applications include the calculation of counting efficiency for in vivo measurement scenarios and the calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulation, including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596
6. Monte Carlo study of alpha (α) particles transport in nanoscale gallium arsenide semiconductor materials
Amir, Haider F. Abdul; Chee, Fuei Pien
2012-09-01
Space and ground level electronic equipment with semiconductor devices is always subject to deleterious effects from radiation. The study of ion-solid interactions can show the radiation effects of scattering and stopping of high-speed atomic particles when passing through matter. Such studies have been of theoretical interest and of practical importance in recent years, driven by the need to control material properties at the nanoscale. This paper presents calculations of the final 3D distribution of the ions and of all kinetic phenomena associated with the ion's energy loss (target damage, sputtering, ionization, and phonon production) for alpha (α) particles in gallium arsenide (GaAs). These calculations are performed using the Monte Carlo simulation code SRIM (Stopping and Range of Ions in Matter). The comparison of radiation tolerance between a conventional-scale and a nanoscale GaAs layer is discussed as well. From the findings, it is observed that most of the damage formed in the GaAs layer is induced by the production of lattice defects in the form of vacancies, defect clusters and dislocations. However, when the GaAs layer is scaled down (nanoscaling), it is found that the layer can withstand higher radiation energy in terms of displacement damage.
7. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media 1. Methodology and flow results
USGS Publications Warehouse
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
8. Measurements of photon and neutron leakage from medical linear accelerators and Monte Carlo simulation of tenth value layers of concrete used for intensity modulated radiation therapy treatment
The x ray leakage from the housing of a therapy x ray source is regulated to be <0.1% of the useful beam exposure at a distance of 1 m from the source. The x ray leakage in the backward direction has been measured from linacs operating at 4, 6, 10, 15, and 18 MV using a 100 cm3 ionization chamber and track-etch detectors. The leakage was measured at nine different positions over the rear wall using a 3 x 3 matrix with a 1 m separation between adjacent positions. In general, the leakage was less than the canonical value, but the exact value depends on energy, gantry angle, and measurement position. Leakage at 10 MV for some positions exceeded 0.1%. Electrons with energy greater than about 9 MeV have the ability to produce neutrons. Neutron leakage has been measured around the head of electron accelerators at a distance of 1 m from the target at 0°, 46°, 90°, 135°, and 180° azimuthal angles, for electron energies of 9, 12, 15, 16, 18, and 20 MeV and for 10, 15, and 18 MV x ray photon beams, using a neutron bubble detector of type BD-PND and Track-Etch detectors. The highest neutron dose equivalent per unit electron dose was at 0° for all electron energies. The neutron leakage from the photon beams was the highest among all the machines. Intensity modulated radiation therapy (IMRT) delivery consists of a summation of small beamlets having different weights that make up each field. A linear accelerator room designed exclusively for IMRT use would require different, probably lower, tenth value layers (TVL) for determining the required wall thicknesses for the primary barriers. The first, second, and third TVL of Co-60 gamma rays and of photons from 4, 6, 10, 15, and 18 MV x ray beams in concrete have been determined and modeled using a Monte Carlo technique (MCNP version 4C2) for cone beams with half-opening angles of 0°, 3°, 6°, 9°, 12°, and 14°.
9. Assessment of uncertainties in the lung activity measurement of low-energy photon emitters using Monte Carlo simulation of ICRP male thorax voxel phantom.
PubMed
Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S
2015-12-01
Assessment of intake of long-lived actinides via the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room using sensitive detection systems such as a Phoswich detector and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and the detectors in the Monte Carlo code 'FLUKA'. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of 0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics, in the form of scattering factors (SFs). SFs are found to decrease with increase in energy. With the HPGe array, the highest SF of 1.84 is found at 18 keV. It reduces to 1.36 at 238 keV. PMID:25468992
10. Monte Carlo study of the effect of collimator thickness on Tc-99m source response in single photon emission computed tomography.
PubMed
2012-05-01
In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study is an evaluation of the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. In the present study, the SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of (99m)Tc and an acrylic cylindrical Jaszczak phantom, with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. According to the tabulated calculated detector parameters, the contributions of Compton scattering and photoelectric reactions, and the peak-to-Compton (P/C) area in the energy spectra obtained from scanning the sources with 11 collimator thicknesses (ranging from 2.400 to 2.410 cm), we concluded that 2.405 cm is the proper LEHR parallel-hole collimator thickness. The image quality analyses by the structural similarity index (SSIM) algorithm and also by visual inspection showed suitable image quality for a collimator thickness of 2.405 cm. The projections and reconstructed images prepared with the 2.405 cm LEHR collimator thickness also showed suitable quality and performance-parameter analysis results compared with the other collimator thicknesses. PMID:23372440
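An SSIM comparison of the kind used above can be reproduced for any pair of reconstructed slices with scikit-image. The arrays below are stand-ins (a noisy Poisson image and a blurred, noisier copy), not SIMIND output; only the metric call reflects the analysis described in the abstract.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.metrics import structural_similarity as ssim

    rng = np.random.default_rng(1)

    # Stand-ins for a reference reconstruction and one obtained with a different
    # collimator thickness (reference plus resolution loss and extra noise).
    reference = rng.poisson(100.0, size=(128, 128)).astype(float)
    candidate = gaussian_filter(reference, sigma=1.5) + rng.normal(0.0, 2.0, reference.shape)

    score = ssim(reference, candidate, data_range=reference.max() - reference.min())
    print(f"SSIM = {score:.3f}")                   # 1.0 would mean identical images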
11. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
12. A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES
SciTech Connect
Schnittman, Jeremy D.; Krolik, Julian H.
2013-11-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
13. Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference
Marzouk, Y.; Parno, M.
2014-12-01
We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
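The idea of reshaping a Metropolis proposal with a map learned from past states can be caricatured with the simplest possible map: a fixed lower-triangular (Cholesky) linear transform estimated from early samples. This toy sketch is only a linearized stand-in for the adaptive, convex-optimization-based construction described above, demonstrated on a correlated Gaussian target.

    import numpy as np

    rng = np.random.default_rng(3)
    target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    target_prec = np.linalg.inv(target_cov)

    def log_target(x):
        return -0.5 * x @ target_prec @ x

    x = np.zeros(2)
    samples = [x]
    L = np.eye(2)                                  # current lower-triangular map
    for i in range(20000):
        if i == 2000:                              # one-shot adaptation from early samples
            L = np.linalg.cholesky(np.cov(np.array(samples).T) + 1e-6 * np.eye(2))
        prop = x + 0.5 * (L @ rng.standard_normal(2))   # map-shaped random-walk proposal
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)

    print(np.cov(np.array(samples[5000:]).T))      # should approach target_cov

Because the map is frozen after the single adaptation step and the proposal stays symmetric, the chain remains a valid Metropolis sampler for the exact target, which mirrors (in a much weaker form) the ergodicity argument made for the inexact-map algorithm in the abstract.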
14. Comparison of Two Accelerators for Monte Carlo Radiation Transport Calculations, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p Coprocessor: A Case Study for X-ray CT Imaging Dose Calculation
Liu, Tianyu; Xu, X. George; Carothers, Christopher D.
2014-06-01
Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT(CPU), ARCHER-CT(GPU) and ARCHER-CT(COP), which run in parallel on multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT(CPU) running with 12 hyperthreads.
15. Numerical modeling of photon migration in the cerebral cortex of the living rat using the radiative transport equation
2015-03-01
Accurate modeling and efficient calculation of photon migration in biological tissues are required for determining the optical properties of living tissues from in vivo experiments. This study develops a calculation scheme of photon migration for determining the optical properties of the rat cerebral cortex (ca 0.2 cm thick) based on the three-dimensional time-dependent radiative transport equation, assuming a homogeneous object. It is shown that the time-resolved profiles calculated by the developed scheme agree with the profiles measured by in vivo experiments using near infrared light. An efficient calculation method using the delta-Eddington approximation of the scattering phase function is also tested.
16. Controlling resonant photonic transport along optical waveguides by two-level atoms
SciTech Connect
Yan Conghua; Wei Lianfu; Jia Wenzhi; Shen, Jung-Tsung
2011-10-15
Recent works [Shen et al., Phys. Rev. Lett. 95, 213001 (2005); Zhou et al., Phys. Rev. Lett. 101, 100501 (2008)] showed that the incident photons cannot transmit along an optical waveguide containing a resonant two-level atom (TLA). Here we propose an approach to overcome such a difficulty by using asymmetric couplings between the photons and a TLA. Our numerical results show that the transmission spectrum of the photon depends on both the frequency of the incident photons and the photon-TLA couplings. Consequently, this system can serve as a controllable photon attenuator, by which the transmission probability of the resonantly incident photons can be changed from 0% to 100%. A possible application to explain the recent experimental observations [Astafiev et al., Science 327, 840 (2010)] is also discussed.
17. Monte Carlo portal dosimetry
SciTech Connect
Chin, P.W. E-mail: [email protected]
2005-10-15
This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.
18. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin.
PubMed
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A
2014-01-01
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or 'epidermal', photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively. PMID:25234839
19. Design studies of volume-pumped photolytic systems using a photon transport code
Prelas, M. A.; Jones, G. L.
1982-01-01
The use of volume sources, such as nuclear pumping, presents some unique features in the design of photolytically driven systems (e.g., lasers). In systems such as these, for example, a large power deposition is not necessary. However, certain restrictions, such as self-absorption, limit the ability of photolytically driven systems to scale by volume. A photon transport computer program was developed at the University of Missouri-Columbia to study these limitations. The development of this code is important, perhaps necessary, for the design of photolytically driven systems. With the aid of this code, a photolytically driven iodine laser was designed for utilization with a 3He nuclear-pumped system with a TRIGA reactor as the neutron source. Calculations predict a peak power output of 0.37 kW. Using the same design, it is also anticipated that the system can achieve a 14-kW output using a fast burst-type reactor neutron source, and a 0.65-kW peak output using 0.1 Torr of the alpha emitter radon-220 as part of the fill. The latter would represent a truly portable laser system.
20. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Chad Webb, R.; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.
2014-09-01
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.
1. Status of Monte Carlo at Los Alamos
SciTech Connect
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.
2. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model
SciTech Connect
Douglass, Michael; Bezak, Eva; Penfold, Scott
2013-07-15
Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and custom built Livermore/DNA hybrid physics list. 10^6 particles were simulated at 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p < 0.05) effect on the overall dose increase in the cell. The low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster and, therefore, has a negligible effect on the overall dose increase due to GNP. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.
3. Elucidating the electron transport in semiconductors via Monte Carlo simulations: an inquiry-driven learning path for engineering undergraduates
Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio
2015-09-01
Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.
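A single-valley, single-scattering-rate toy version of the ensemble Monte Carlo idea referenced above (free flight under the applied field, then a randomizing collision): the InP-specific multivalley physics of the paper is not reproduced, and the effective mass, field, and scattering rate below are placeholders rather than calibrated values.

    # Toy ensemble Monte Carlo for electron drift in a semiconductor (illustrative only).
    # Assumptions: one parabolic valley, a constant total scattering rate, isotropic
    # elastic scattering; material parameters are placeholders, not InP values.
    import numpy as np

    rng = np.random.default_rng(1)
    q = 1.602e-19            # elementary charge (C)
    m = 0.08 * 9.109e-31     # effective mass (kg), placeholder
    E = 5e5                  # applied field (V/m), placeholder
    Gamma = 1e13             # total scattering rate (1/s), placeholder

    n_electrons, n_steps = 2000, 200
    v = np.zeros((n_electrons, 3))          # electron velocities
    drift = []

    for _ in range(n_steps):
        # Free-flight times sampled from the exponential distribution of rate Gamma.
        dt = rng.exponential(1.0 / Gamma, n_electrons)
        v[:, 0] += -q * E / m * dt          # acceleration along the field during flight
        drift.append(v[:, 0].mean())
        # Isotropic elastic scattering: keep the speed, randomize the direction.
        speed = np.linalg.norm(v, axis=1)
        cos_t = rng.uniform(-1.0, 1.0, n_electrons)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_electrons)
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        v = speed[:, None] * np.stack([cos_t, sin_t * np.cos(phi), sin_t * np.sin(phi)], axis=1)

    print("steady-state drift velocity estimate (m/s):", np.mean(drift[n_steps // 2:]))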
4. An investigation of the depth dose in the build-up region, and surface dose for a 6-MV therapeutic photon beam: Monte Carlo simulation and measurements
PubMed Central
Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn
2013-01-01
The percentage depth dose in the build-up region and the surface dose for the 6-MV photon beam from a Varian Clinac 23EX medical linear accelerator was investigated for square field sizes of 5 × 5, 10 × 10, 15 × 15 and 20 × 20 cm² using the EGS4nrc Monte Carlo (MC) simulation package. The depth dose was found to change rapidly in the build-up region, and the percentage surface dose increased proportionally with the field size from approximately 10% to 30%. The measurements were also taken using four common detectors: TLD chips, PFD dosimeter, parallel-plate and cylindrical ionization chamber, and compared with MC simulated data, which served as the gold standard in our study. The surface doses obtained from each detector were derived from the extrapolation of the measured depth doses near the surface and were all found to be higher than that of the MC simulation. The lowest and highest over-responses in the surface dose measurement were found with the TLD chip and the CC13 cylindrical ionization chamber, respectively. Increasing the field size increased the percentage surface dose almost linearly in the various dosimeters and also in the MC simulation. Interestingly, the use of the CC13 ionization chamber eliminates the high gradient feature of the depth dose near the surface. The correction factors for the measured surface dose from each dosimeter for square field sizes of between 5 × 5 and 20 × 20 cm² are introduced. PMID:23104898
5. A feasibility study to calculate unshielded fetal doses to pregnant patients in 6-MV photon treatments using Monte Carlo methods and anatomically realistic phantoms
SciTech Connect
Bednarz, Bryan; Xu, X. George
2008-07-15
A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management.
6. Study of water transport phenomena on cathode of PEMFCs using Monte Carlo simulation
Soontrapa, Karn
This dissertation deals with the development of a three-dimensional computational model of water transport phenomena in the cathode catalyst layer (CCL) of PEMFCs. The catalyst layer in the numerical simulation was developed using an optimized sphere packing algorithm. The optimization technique named the adaptive random search technique (ARSET) was employed in this packing algorithm. The ARSET algorithm generates the initial locations of the spheres and allows them to move in random directions with a variable moving distance, randomly selected from the sampling range, based on the Lennard-Jones potential of the current and new configurations. The solid fraction values obtained from this developed algorithm are in the range of 0.631 to 0.6384, while the actual processing time can be reduced significantly, by 8% to 36%, depending on the number of spheres. The initial random number sampling range was investigated and the appropriate sampling range value is equal to 0.5. This numerically developed cathode catalyst layer has been used to simulate the diffusion processes of protons, in the form of hydronium, and oxygen molecules through the cathode catalyst layer. The movements of hydroniums and oxygen molecules are controlled by random vectors, and all of these moves have to obey the Lennard-Jones potential energy constraint. A chemical reaction between these two species happens when they share the same neighborhood, resulting in the creation of water molecules. Like hydroniums and oxygen molecules, these newly formed water molecules also diffuse through the cathode catalyst layer. It is important to investigate and study the distribution of hydronium, oxygen, and water molecules during the diffusion process in order to understand the lifetime of the cathode catalyst layer. The effect of fuel flow rate on the water distribution has also been studied by varying the hydronium and oxygen molecule input. Based on the results of these simulations, the hydronium:oxygen input ratio of 3:2 has been found to be the best choice for this study. To study the effect of metal impurity and gas contamination on the cathode catalyst layer, the cathode catalyst layer structure is modified by adding the metal impurities, and the gas contamination is introduced with the oxygen input. In this study, gas contamination has very little effect on the electrochemical reaction inside the cathode catalyst layer because this simulation is transient in nature and the percentage of the gas contamination is small, in the range of 0.0005% to 0.0015% for CO and 0.028% to 0.04% for CO2. Metal impurities seem to have more effect on the performance of PEMFC because they not only change the structure of the developed cathode catalyst layer but also affect the movement of fuel and water product. Aluminum has the worst effect on the cathode catalyst layer structure because it yields the lowest amount of newly formed water and the largest amount of trapped water product compared to iron of the same impurity percentage. The iron impurity, in contrast, shows some positive effect on the lifetime of the cathode catalyst layer. At 0.75 wt% iron impurity, the amount of newly formed water is 6.59% lower than in the pure carbon catalyst layer case, but the amount of trapped water product is 11.64% lower than in the pure catalyst layer. The lifetime of the impure cathode catalyst layer is longer than that of the pure one because the amount of water still trapped inside the pure cathode catalyst layer is higher than that of the impure one. Even though the impure cathode catalyst layer has a longer lifetime, it sacrifices electrical power output because the electrochemical reaction occurrence inside the impure catalyst layer is lower.
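A much-simplified random-search sphere-packing sketch in the spirit of the ARSET step described above: spheres take random trial moves with a shrinking sampling range, and a move is kept only if it does not push that sphere closer to its nearest neighbor. The greedy nearest-gap acceptance rule stands in for the Lennard-Jones energy comparison of the dissertation, and the box size, sphere count, and radius are arbitrary.

    # Simplified random-search sphere packing (illustrative; not the ARSET algorithm itself).
    # The nearest-gap acceptance test below stands in for the Lennard-Jones comparison
    # described in the dissertation; all dimensions are arbitrary.
    import numpy as np

    rng = np.random.default_rng(2)
    n, radius, box = 60, 0.5, 5.0

    def min_gap(pos, i):
        # Distance from sphere i to its nearest neighbor.
        d = np.linalg.norm(np.delete(pos, i, axis=0) - pos[i], axis=1)
        return d.min()

    # Start from random (possibly overlapping) positions, then relax by random moves.
    pos = rng.uniform(radius, box - radius, (n, 3))
    step = 0.5
    for sweep in range(300):
        for i in range(n):
            trial = np.clip(pos[i] + rng.uniform(-step, step, 3), radius, box - radius)
            old, old_gap = pos[i].copy(), min_gap(pos, i)
            pos[i] = trial
            if min_gap(pos, i) < old_gap:
                pos[i] = old          # keep only moves that do not bring spheres closer
        step *= 0.99                  # shrink the sampling range as the packing relaxes

    overlapping_pairs = sum(
        np.linalg.norm(pos[i] - pos[j]) < 2 * radius
        for i in range(n) for j in range(i + 1, n)
    )
    solid_fraction = n * (4.0 / 3.0) * np.pi * radius ** 3 / box ** 3
    print("remaining overlapping pairs:", int(overlapping_pairs))
    print("nominal solid fraction:", round(solid_fraction, 3))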
7. Overview of the MCU Monte Carlo Software Package
Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.
2014-06-01
MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.
8. Vectorizing and macrotasking Monte Carlo neutral particle algorithms
SciTech Connect
Heifetz, D.B.
1987-04-01
Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, macrotasking may be taken to its limit, with each test flight, and each split test flight, being a separate task.
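The array-oriented flight loop and the "gap" problem described above can be illustrated with a small NumPy analog: all active histories advance together, and finished flights are compacted out of the arrays between passes. This is only an analog of the vectorization strategy, not the original plasma neutral-transport code, and the slab parameters are made up.

    # NumPy analog of a vectorized Monte Carlo flight loop with compaction of finished
    # histories (illustrative; the original codes were vectorized Fortran, not Python).
    import numpy as np

    rng = np.random.default_rng(3)
    sigma_t, absorb_prob, thickness = 1.0, 0.3, 2.0   # made-up slab parameters
    n = 100_000

    x = np.zeros(n)                  # positions of all active histories
    mu = np.ones(n)                  # direction cosines (start forward)
    transmitted = absorbed = 0

    while x.size:                    # loop until every history has finished
        # One "flight" for the whole array at once: sample distances to collision.
        x = x + mu * rng.exponential(1.0 / sigma_t, x.size)

        escaped = x >= thickness
        reflected = x < 0.0
        transmitted += int(escaped.sum())
        absorbed_mask = (~escaped) & (~reflected) & (rng.random(x.size) < absorb_prob)
        absorbed += int(absorbed_mask.sum())

        # Compaction step: drop finished flights so gaps do not impede the next pass.
        alive = ~(escaped | reflected | absorbed_mask)
        x, mu = x[alive], mu[alive]
        mu = rng.uniform(-1.0, 1.0, x.size)   # isotropic re-scattering of survivors

    print("transmission fraction:", transmitted / n)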
9. Monte-Carlo simulation for an aerogel Cherenkov counter
Suda, R.; Watanabe, M.; Enomoto, R.; Iijima, T.; Adachi, I.; Hattori, H.; Kuniya, T.; Ooba, T.; Sumiyoshi, T.; Yoshida, Y.
1998-02-01
We have developed a Monte-Carlo simulation code for an aerogel Cherenkov counter which is operated under a strong magnetic field such as 1.5 T. This code consists of two parts: photon transportation inside aerogel tiles, and one-dimensional amplification in a fine-mesh photomultiplier tube. It reproduces the output photo-electron yields to within 5% with only a single free parameter. This code is applied to simulations for a B-factory particle identification system.
10. The All Particle Monte Carlo method: Atomic data files
SciTech Connect
Rathkopf, J.A.; Cullen, D.E.; Perkins, S.T.
1990-11-06
Development of the All Particle Method, a project to simulate the transport of particles via the Monte Carlo method, has proceeded on two fronts: data collection and algorithm development. In this paper we report on the status of the data libraries. The data collection is nearly complete with the addition of electron, photon, and atomic data libraries to the existing neutron, gamma ray, and charged particle libraries. The contents of these libraries are summarized.
11. A Monte Carlo neutron transport code for eigenvalue calculations on a dual-GPU system and CUDA environment
SciTech Connect
Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. The method of porting a regular transport code to GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation becomes different for eigenvalue calculation in that it is performed on a generation-by-generation basis and the thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculation under simple geometries on a multi-GPU system. The specifics of algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system respectively. The speedup factor was further increased by a factor of ~2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
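The generation-by-generation structure mentioned above can be sketched independently of CUDA: each fission generation is transported from the current source bank, a new bank is built from the fission sites it produces, and k_eff is estimated as the ratio of produced to starting neutrons. The single homogeneous sphere and the cross-section values below are a toy model, not the paper's geometry or data, and this sketch runs on the CPU only.

    # Toy k-eigenvalue Monte Carlo with a generation-by-generation structure
    # (single homogeneous sphere, made-up cross sections; illustrative only).
    import numpy as np

    rng = np.random.default_rng(4)
    R = 8.0                                          # sphere radius (cm), illustrative
    sig_t, sig_f, sig_c, nu = 0.30, 0.08, 0.10, 2.5  # made-up macroscopic data (1/cm)

    def run_generation(source):
        """Transport one fission generation; return the fission sites it produces."""
        next_sites = []
        for start in source:
            pos = start.copy()
            while True:
                mu, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
                s = np.sqrt(1 - mu ** 2)
                direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
                pos = pos + direction * rng.exponential(1.0 / sig_t)
                if np.linalg.norm(pos) > R:          # leakage out of the sphere
                    break
                xi = rng.random() * sig_t
                if xi < sig_f:                       # fission: bank new source sites
                    next_sites.extend(pos.copy() for _ in range(rng.poisson(nu)))
                    break
                if xi < sig_f + sig_c:               # capture
                    break
                # otherwise scatter isotropically and keep following the history
        return next_sites

    n_per_gen, k_history = 2000, []
    bank = [np.zeros(3) for _ in range(n_per_gen)]
    for gen in range(30):
        new_bank = run_generation(bank)
        k_history.append(len(new_bank) / len(bank))
        # Resample the bank to a fixed size so the population stays controlled.
        bank = [new_bank[rng.integers(len(new_bank))].copy() for _ in range(n_per_gen)]

    print("k_eff estimate (ignoring first 10 generations):", round(np.mean(k_history[10:]), 3))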
12. Assessment of Parametric Uncertainty using Markov Chain Monte Carlo Methods for Surface Complexation Models in Groundwater Reactive Transport Modeling
Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.
2010-12-01
Parametric uncertainty in groundwater modeling is commonly assessed using the first-order-second-moment method, which yields the linear confidence/prediction intervals. More advanced techniques are able to produce the nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted to certain assumptions, such as normality of the model parameters. We developed a Markov Chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996) obtained from a series of column experiments are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models. In comparison with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust for simulating the breakthrough curves that are not used for the parameter calibration and estimation of parameter distributions.
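A generic adaptive Metropolis sketch of the kind referenced above, with the proposal covariance periodically re-estimated from the accumulated chain and parameter ranges enforced by rejection. The exponential-decay toy model and synthetic data below stand in for the surface-complexation reactive-transport model of the abstract; none of the numbers are from the study.

    # Generic adaptive Metropolis sketch (illustrative; the toy model stands in
    # for the surface complexation / reactive transport model of the abstract).
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy forward model and synthetic "breakthrough" data (assumptions, not real data).
    t = np.linspace(0, 10, 50)
    true_theta = np.array([1.5, 0.4])
    data = true_theta[0] * np.exp(-true_theta[1] * t) + 0.05 * rng.standard_normal(t.size)
    bounds = np.array([[0.0, 5.0], [0.0, 2.0]])     # parameter ranges aid convergence

    def log_post(theta):
        if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
            return -np.inf                           # reject out-of-range proposals
        resid = data - theta[0] * np.exp(-theta[1] * t)
        return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

    theta = np.array([1.0, 1.0])
    chain = [theta]
    cov = 0.01 * np.eye(2)
    for i in range(20000):
        if i > 500 and i % 100 == 0:                 # periodically adapt the proposal
            cov = 2.38 ** 2 / 2 * np.cov(np.array(chain).T) + 1e-8 * np.eye(2)
        prop = rng.multivariate_normal(theta, cov)
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)

    chain = np.array(chain[5000:])
    print("posterior mean:", chain.mean(axis=0))
    print("95% intervals:", np.percentile(chain, [2.5, 97.5], axis=0))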
13. Time-correlated photon-counting probe of singlet excitation transport and restricted rotation in Langmuir-Blodgett monolayers
SciTech Connect
Anfinrud, P.A.; Hart, D.E.; Struve, W.S.
1988-07-14
Fluorescence depolarization was monitored by time-correlated single-photon counting in organized monolayers of octadecylrhodamine B (ODRB) in dioleoylphosphatidylcholine (DOL) at air-water interfaces. At low ODRB density, the depolarization was dominated by restricted rotational diffusion. Increases in surface pressure reduced both the angular range and the diffusion constant for rotational motion. At higher ODRB densities, additional depolarization was observed due to electronic excitation transport. A two-dimensional two-particle theory developed by Baumann and Fayer was found to provide an excellent description of the transport dynamics for reduced chromophore densities up to approximately 5.0. The testing of transport theories proves to be relatively insensitive to the orientational distribution assumed for the ODRB transition moments in these two-dimensional systems.
14. The transport character of quantum state in one-dimensional coupled-cavity arrays: effect of the number of photons and entanglement degree
Ma, Shao-Qiang; Zhang, Guo-Feng
2015-12-01
The transport properties of photons injected into one-dimensional coupled-cavity arrays (CCAs) are studied. It is found that the number of photons cannot change the evolution cycle of the system or the time points at which W states and the NOON state are obtained with relatively higher probability. The transport dynamics in the CCAs shows that entanglement-enhanced state transmission is the more effective phenomenon, and we show that a quantum state with maximum concurrence can be transmitted completely when photon loss is neglected.
15. Monte Carlo fundamentals
SciTech Connect
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
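Two of the fundamentals listed above, random sampling and tallies, can be shown in a few lines: inverse-transform sampling of the free path, and the collision versus track-length estimators of the volume-integrated flux in a purely absorbing slab with a normally incident source. The cross section and slab width are illustrative, not taken from the lecture notes.

    # Inverse-transform path sampling and two flux estimators in a pure absorber
    # (illustrative numbers only).
    import numpy as np

    rng = np.random.default_rng(6)
    sigma_t, L, n = 0.5, 4.0, 200_000          # total cross section (1/cm), slab width (cm)

    collision_tally = 0.0
    track_tally = 0.0
    for _ in range(n):
        # Inverse-transform sampling: s = -ln(xi) / sigma_t gives the free path.
        s = -np.log(rng.random()) / sigma_t
        if s < L:
            collision_tally += 1.0 / sigma_t   # collision estimator: 1/Sigma_t per collision
            track_tally += s                   # track-length estimator: path inside the slab
        else:
            track_tally += L                   # particle crosses the slab without colliding

    print("collision estimate   :", collision_tally / n)
    print("track-length estimate:", track_tally / n)
    print("analytic value       :", (1.0 - np.exp(-sigma_t * L)) / sigma_t)

Both estimators converge to the same analytic value; the track-length estimator simply scores on more histories and therefore has the lower variance here.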
16. Transport calculations for a 14.8 MeV neutron beam in a water phantom
Goetsch, S. J.
A coupled neutron/photon Monte Carlo radiation transport code (MORSE-CG) was used to calculate neutron and photon doses in a water phantom irradiated by 14.8 MeV neutrons from a gas target neutron source. The source-collimator-phantom geometry was carefully simulated. Results of calculations utilizing two different statistical estimators (next collision and track length) are presented.
17. Verification by Monte Carlo methods of a power law tissue-air ratio algorithm for inhomogeneity corrections in photon beam dose calculations.
PubMed
Webb, S; Fox, R A
1980-03-01
A Monte Carlo computer program has been used to calculate axial and off-axis depth dose distributions arising from the interaction of an external beam of 60Co radiation with a medium containing inhomogeneities. An approximation for applying the Monte Carlo data to the configuration where the lateral extent of the inhomogeneity is less than the beam area, is also presented. These new Monte Carlo techniques rely on integration over the dose distributions from constituent sub-beams of small area and the accuracy of the method is thus independent of beam size. The power law correction equation (Batho equation) describing the dose distribution in the presence of tissue inhomogeneities is derived in its most general form. By comparison with Monte Carlo reference data, the equation is validated for routine patient dosimetry. It is explained why the Monte Carlo data may be regarded as a fundamental reference point in performing these tests of the extension to the Batho equation. Other analytic correction techniques, e.g. the equivalent radiological path method, are shown to be less accurate. The application of the generalised power law equation in conjunction with CT scanner data is discussed. For ease of presentation, the details of the Monte Carlo techniques and the analytic formula have been separated into appendices. PMID:7384209
18. Guiding Electromagnetic Waves around Sharp Corners: Topologically Protected Photonic Transport in Metawaveguides
Ma, Tzuhsuan; Khanikaev, Alexander B.; Mousavi, S. Hossein; Shvets, Gennady
2015-03-01
The wave nature of radiation prevents its reflections-free propagation around sharp corners. We demonstrate that a simple photonic structure based on a periodic array of metallic cylinders attached to one of the two confining metal plates can emulate spin-orbit interaction through bianisotropy. Such a metawaveguide behaves as a photonic topological insulator with complete topological band gap. An interface between two such structures with opposite signs of the bianisotropy supports topologically protected surface waves, which can be guided without reflections along sharp bends of the interface.
19. Guiding electromagnetic waves around sharp corners: topologically protected photonic transport in metawaveguides.
PubMed
Ma, Tzuhsuan; Khanikaev, Alexander B; Mousavi, S Hossein; Shvets, Gennady
2015-03-27
The wave nature of radiation prevents its reflections-free propagation around sharp corners. We demonstrate that a simple photonic structure based on a periodic array of metallic cylinders attached to one of the two confining metal plates can emulate spin-orbit interaction through bianisotropy. Such a metawaveguide behaves as a photonic topological insulator with complete topological band gap. An interface between two such structures with opposite signs of the bianisotropy supports topologically protected surface waves, which can be guided without reflections along sharp bends of the interface. PMID:25860770
20. MCNP/X TRANSPORT IN THE TABULAR REGIME
SciTech Connect
2007-01-08
The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross sections sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
1. An Electron/Photon/Relaxation Data Library for MCNP6
SciTech Connect
2015-08-07
The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.
2. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs
Rodriguez, M.; Sempau, J.; Brualla, L.
2012-05-01
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented on the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented on any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
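A generic weight-window style sketch of the two techniques combined above (not the PENELOPE 'splitting-roulette' implementation itself): particles with weight above an upper bound are split into lighter copies, particles below a lower bound play Russian roulette, and the expected total weight is preserved so tallies stay unbiased. The window bounds are arbitrary.

    # Generic particle splitting + Russian roulette on statistical weight (illustrative;
    # not the 'splitting-roulette' implementation described in the abstract).
    import random

    W_LOW, W_HIGH, W_SURVIVE = 0.25, 2.0, 1.0   # illustrative weight-window bounds

    def apply_weight_window(particles):
        """particles: list of dicts with a 'weight' key; returns an unbiased new list."""
        out = []
        for p in particles:
            w = p["weight"]
            if w > W_HIGH:
                # Splitting: replace one heavy particle by n copies of weight w/n.
                n = int(w / W_SURVIVE) + 1
                out.extend(dict(p, weight=w / n) for _ in range(n))
            elif w < W_LOW:
                # Russian roulette: survive with probability w / W_SURVIVE, else kill.
                if random.random() < w / W_SURVIVE:
                    out.append(dict(p, weight=W_SURVIVE))
            else:
                out.append(p)
        return out

    # The total weight is preserved on average through the window.
    pop = [{"weight": w} for w in (0.05, 0.4, 1.0, 3.7, 6.2)]
    print(sum(p["weight"] for p in pop), "->",
          sum(p["weight"] for p in apply_weight_window(pop)))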
3. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs.
PubMed
Rodriguez, M; Sempau, J; Brualla, L
2012-05-21
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented on the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented on any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. PMID:22538321
4. Design and fabrication of hollow-core photonic crystal fibers for high-power ultrashort pulse transportation and pulse compression.
TOXLINE Toxicology Bibliographic Information
Wang YY; Peng X; Alharbi M; Dutin CF; Bradley TD; Gérôme F; Mielke M; Booth T; Benabid F
2012-08-01
We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large core fiber exhibits the lowest attenuation reported to date among all kagome fibers, 40 dB/km, over a broadband transmission centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, and 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little propagation pulse distortion. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air.
5. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations
SciTech Connect
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
6. Inclusion of photon production and transport and (e/sup +/e/sup /minus//) pair production in a particle-in-cell code for astrophysical applications
SciTech Connect
Sulkanen, M.E.; Gisler, G.R.
1989-01-01
This present study constitutes the first attempt to include, in a particle-in-cell code, the effects of radiation losses, photon production and transport, and charged-particle production by photons scattering in an intense background magnetic field. We discuss the physics and numerical issues that had to be addressed in including these effects in the ISIS code. Then we present a test simulation of the propagation of a pulse of high-energy photons across an intense magnetic field using this modified version of ISIS. This simulation demonstrates dissipation of the photon pulse with charged-particle production, emission of secondary synchrotron and curvature photons and the concomitant momentum dissipation of the charged particles, and subsequent production of lower-energy pairs. 5 refs.
7. Utilizing Monte-Carlo radiation transport and spallation cross sections to estimate nuclide dependent scaling with altitude
Argento, D.; Reedy, R. C.; Stone, J.
2010-12-01
Cosmogenic Nuclides (CNs) are a critical new tool for geomorphology, allowing researchers to date Earth surface events and measure process rates [1]. Prior to CNs, many of these events and processes had no absolute method for measurement and relied entirely on relative methods [2]. Continued improvements in CN methods are necessary for expanding analytic capability in geomorphology. In the last two decades, significant progress has been made in refining these methods and reducing analytic uncertainties [1,3]. Calibration data and scaling methods are being developed to provide a self-consistent platform for use in interpreting nuclide concentration values as geologic data [4]. However, nuclide dependent scaling has been difficult to address due to analytic uncertainty and sparseness in altitude transects. Artificial target experiments are underway, but these experiments take considerable time for nuclide buildup at lower altitudes. In this study, a Monte Carlo method radiation transport code, MCNPX, is used to model the galactic cosmic-ray radiation impinging on the upper atmosphere and track the resulting secondary particles through a model of the Earth's atmosphere and lithosphere. To address the issue of nuclide dependent scaling, the neutron flux values determined by the MCNPX simulation are folded in with estimated cross-section values [5,6]. Preliminary calculations indicate that scaling of nuclide production potential in free air seems to be a function of both altitude and nuclide production pathway. At 0 g/cm2 (sea-level) all neutron spallation pathways have attenuation lengths within 1% of 130 g/cm2. However, the differences in attenuation length are exacerbated with increasing altitude. At 530 g/cm2 atmospheric height (~5,500 m), the apparent attenuation lengths for aggregate SiO2(n,x)10Be, aggregate SiO2(n,x)14C and K(n,x)36Cl become 149.5 g/cm2, 151 g/cm2 and 148 g/cm2 respectively. At 700 g/cm2 atmospheric height (~8,400 m, close to the highest possible sampling altitude), the apparent attenuation lengths become 171 g/cm2, 174 g/cm2 and 165 g/cm2 respectively, a difference of +/-5%. Based on this preliminary data, there may be up to 6% error in production rate scaling. Proton spallation is a small, yet important component of spallation events. These data will also be presented along with the neutron results. While the differences between attenuation lengths for individual nuclides are small at sea-level, they are systematic and are exacerbated with altitude. Until now, there has been no numeric analysis of this phenomenon, therefore the global scaling schemes for CNs have been missing an aspect of physics critical for achieving close agreement between empiric calibration data and physics based models. [1] T. J. Dunai, "Cosmogenic Nuclides: Principles, Concepts and Applications in the Earth Surface Sciences", Cambridge University Press, Cambridge, 2010 [2] D. Lal, Annual Rev of Earth Planet Sci, 1988, p355-388 [3] J. Gosse and F. Phillips, Quaternary Science Rev, 2001, p1475-1560 [4] F. Phillips et al., (Proposal to the National Science Foundation), 2003 [5] K. Nishiizumi et al., Geochimica et Cosmochimica Acta, 2009, p2163-2176 [6] R. C. Reedy, personal comm.
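The apparent attenuation lengths quoted above follow from fitting an exponential of atmospheric depth to the simulated production or flux; a two-point version of that arithmetic, with made-up production values, is:

    # Two-point apparent attenuation length, Lambda = (x2 - x1) / ln(P1 / P2),
    # assuming P(x) proportional to exp(-x / Lambda). The production values below
    # are hypothetical, chosen only to show the arithmetic.
    import math

    x1, x2 = 0.0, 130.0      # depths along the attenuation direction (g/cm^2)
    P1, P2 = 1.00, 0.368     # hypothetical relative production rates at x1 and x2

    attenuation_length = (x2 - x1) / math.log(P1 / P2)
    print(f"apparent attenuation length: {attenuation_length:.1f} g/cm^2")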
8. The role of plasma evolution and photon transport in optimizing future advanced lithography sources
SciTech Connect
Sizyuk, Tatyana; Hassanein, Ahmed
2013-08-28
Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on using small liquid tin droplets as targets, which has many advantages including generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets have diameters in the range of 20-30 μm in LPP devices with a dual-beam technique. Such devices can produce EUV sources with conversion efficiency around 3% and with collected EUV power of 190 W or more that can satisfy current requirements for high volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect conditions for optimum EUV photon generation and collection. The importance and accurate description of modeling these physical processes increase with the decrease in target size and its simulation domain.
9. The effect of biological shielding on fast neutron and photon transport in the VVER-1000 mock-up model placed in the LR-0 reactor.
PubMed
Košťál, Michal; Cvachovec, František; Milčák, Ján; Mravec, Filip
2013-05-01
The paper is intended to show the effect of a biological shielding simulator on fast neutron and photon transport in its vicinity. The fast neutron and photon fluxes were measured by means of scintillation spectroscopy using 45 × 45 mm and 10 × 10 mm cylindrical stilbene detectors. The neutron spectrum was measured in the range of 0.6-10 MeV and the photon spectrum in 0.2-9 MeV. The results of the experiment are compared with calculations. The calculations were performed with various nuclear data libraries. PMID:23434890
10. Utilization of Monte Carlo Calculations in Radiation Transport Analyses to Support the Design of the U.S. Spallation Neutron Source (SNS)
SciTech Connect
Johnson, J.O.
2000-10-23
The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL) and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an ~0.5 microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.
11. Comparison of experimental and Monte-Carlo simulation of MeV particle transport through tapered/straight glass capillaries and circular collimators
Hespeels, F.; Tonneau, R.; Ikeda, T.; Lucas, S.
2015-11-01
This study compares the capabilities of three different passive collimation devices to produce micrometer-sized beams for proton and alpha particle beams (1.7 MeV and 5.3 MeV respectively): classical platinum TEM-like collimators, straight glass capillaries and tapered glass capillaries. In addition, we developed a Monte-Carlo code, based on the Rutherford scattering theory, which simulates particle transportation through collimating devices. The simulation results match the experimental observations of beam transportation through collimators both in air and vacuum. This research shows the focusing effects of tapered capillaries, which clearly enable higher transmission flux. Nevertheless, alignment of the capillaries with the incident beam is a prerequisite and is tedious, which makes the TEM collimator the easiest way to produce a 50 μm microbeam.
12. Monte Carlo tests of small-world architecture for coarse-grained networks of the United States railroad and highway transportation systems
Aldrich, Preston R.; El-Zabet, Jermeen; Hassan, Seerat; Briguglio, Joseph; Aliaj, Enela; Radcliffe, Maria; Mirza, Taha; Comar, Timothy; Nadolski, Jeremy; Huebner, Cynthia D.
2015-11-01
Several studies have shown that human transportation networks exhibit small-world structure, meaning they have high local clustering and are easily traversed. However, some have concluded this without statistical evaluations, and others have compared observed structure to globally random rather than planar models. Here, we use Monte Carlo randomizations to test US transportation infrastructure data for small-worldness. Coarse-grained network models were generated from GIS data wherein nodes represent the 3105 contiguous US counties and weighted edges represent the number of highway or railroad links between counties; thus, we focus on linkage topologies and not geodesic distances. We compared railroad and highway transportation networks with a simple planar network based on county edge-sharing, and with networks that were globally randomized and those that were randomized while preserving their planarity. We conclude that terrestrial transportation networks have small-world architecture, as it is classically defined relative to global randomizations. However, this topological structure is sufficiently explained by the planarity of the graphs, and in fact the topological patterns established by the transportation links actually serve to reduce the amount of small-world structure.
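A sketch of the Monte Carlo randomization test described above, using a stand-in graph because the county-level GIS data are not reproduced here: the observed clustering coefficient and characteristic path length are compared against a null distribution of degree-preserving edge rewirings. The stand-in network, replicate count, and swap count are assumptions; the approach requires the networkx package.

    # Monte Carlo small-world test on a stand-in graph (illustrative; the real study
    # used county-level GIS data). Requires networkx.
    import random
    import networkx as nx

    random.seed(7)
    G = nx.connected_watts_strogatz_graph(200, 6, 0.1, seed=7)   # stand-in "observed" network
    C_obs = nx.average_clustering(G)
    L_obs = nx.average_shortest_path_length(G)

    C_null, L_null = [], []
    for _ in range(100):
        R = G.copy()
        # Degree-preserving randomization via double edge swaps.
        nx.double_edge_swap(R, nswap=5 * R.number_of_edges(), max_tries=10**6)
        C_null.append(nx.average_clustering(R))
        if nx.is_connected(R):
            L_null.append(nx.average_shortest_path_length(R))

    p_C = sum(c >= C_obs for c in C_null) / len(C_null)
    print(f"observed C = {C_obs:.3f}, L = {L_obs:.2f}")
    print(f"null C mean = {sum(C_null)/len(C_null):.3f}, "
          f"Monte Carlo p(C_null >= C_obs) = {p_C:.2f}")
    if L_null:
        print(f"null L mean = {sum(L_null)/len(L_null):.2f}")

Swapping in a planarity-preserving rewiring (as the study does) rather than the global degree-preserving swap used here is exactly what separates the two null models compared in the abstract.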
13. MORSE Monte Carlo code
SciTech Connect
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
14. Effective QCD and transport description of dilepton and photon production in heavy-ion collisions and elementary processes
Linnyk, O.; Bratkovskaya, E. L.; Cassing, W.
2016-03-01
In this review we address the dynamics of relativistic heavy-ion reactions and in particular the information obtained from electromagnetic probes that stem from the partonic and hadronic phases. The out-of-equilibrium description of strongly interacting relativistic fields is based on the theory of Kadanoff and Baym. For the modeling of the partonic phase we introduce an effective dynamical quasiparticle model (DQPM) for QCD in equilibrium. In the DQPM, the widths and masses of the dynamical quasiparticles are controlled by transport coefficients that can be compared to the corresponding quantities from lattice QCD. The resulting off-shell transport approach is denoted by Parton-Hadron-String Dynamics (PHSD) and includes covariant dynamical transition rates for hadronization and keeps track of the hadronic interactions in the final phase. It is shown that the PHSD captures the bulk dynamics of heavy-ion collisions from lower SPS to LHC energies and thus provides a solid basis for the evaluation of the electromagnetic emissivity, which is calculated on the basis of the same dynamical parton propagators that are employed for the dynamical evolution of the partonic system. The production of direct photons in elementary processes and heavy-ion reactions is discussed and the present status of the photon v2 "puzzle" (a large elliptic flow v2 of the direct photons experimentally observed in heavy-ion collisions) is addressed for nucleus-nucleus reactions at RHIC and LHC energies. The role of hadronic and partonic sources for the photon spectra and the flow coefficients v2 and v3 is considered as well as the possibility to subtract the QGP signal from the experimental observables. Furthermore, the production of e+e- or μ+μ- pairs in elementary processes and A + A reactions is addressed. The calculations within the PHSD from SIS to LHC energies show an increase of the low mass dilepton yield essentially due to the in-medium modification of the ρ-meson and at the lowest energy also due to a multiple regeneration of Δ-resonances. Furthermore, pronounced traces of the partonic degrees-of-freedom are found in the intermediate dilepton mass regime (1.2 GeV < M < 3 GeV) at relativistic energies, which will also shed light on the nature of the very early degrees-of-freedom in nucleus-nucleus collisions.
15. Calculs Monte Carlo en transport d'energie pour le calcul de la dose en radiotherapie sur plateforme graphique hautement parallele [Monte Carlo energy-transport calculations for radiotherapy dose computation on a highly parallel graphics platform]
Hissoiny, Sami
Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate so that the medical physicists and the radio-oncologists can make a decision based on results close to reality and 2) fast enough to allow a routine use of dose calculation. The compromise between these two factors in opposition gave way to the creation of several dose calculation algorithms, from the fastest and most approximate to the slowest and most accurate. The most accurate of these algorithms is the Monte Carlo method, since it is based on basic physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processor unit (GPU). The hardware platform has existed since before 2007, and certain scientific computations were already carried out on the GPU. The year 2007, however, marks the arrival of the CUDA programming language, which makes it possible to disregard graphic contexts to program the GPU. The GPU is a massively parallel computing platform and is adapted to data parallel algorithms. This thesis aims at determining how to maximize the use of a graphics processing unit (GPU) to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation that is carried out completely on the GPU. The first objective of this thesis is to evaluate this method for a calculation in external radiotherapy. Simple monoenergetic sources and phantoms in layers are used. A comparison with the EGSnrc platform and DPM is carried out. GPUMCD is within a 2%-2 mm gamma criterion against EGSnrc while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective consists in the evaluation of the platform for brachytherapy calculation. Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43 reference geometry. Differences of less than 4% are found compared to the BrachyDose platform as well as TG-43 consensus data. The third objective aims at the use of GPUMCD for dose calculation within an MRI-Linac environment. To this end, the effect of the magnetic field on charged particles has been added to the simulation. It was shown that GPUMCD is within a 2%-2 mm gamma criterion of two experiments aiming at highlighting the influence of the magnetic field on the dose distribution. The results suggest that the GPU is an interesting computing platform for dose calculations through Monte Carlo simulations and that the software platform GPUMCD makes it possible to achieve fast and accurate results.
16. Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
PubMed Central
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data, direction, and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed. According to these factors, various generations of treatment planning systems have been developed. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. The Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter. Other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; therefore, Monte Carlo techniques are accurate but slow for clinical use. In recent years, with the development of fast Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
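As a toy illustration of the statement that Monte Carlo codes follow each particle individually and sample its interactions from physical distributions, the sketch below estimates photon transmission through a uniform slab by sampling exponential free paths and compares the result with the analytic value. It is not one of the clinical codes discussed in this review; the attenuation coefficient and slab thickness are assumed values.

```python
import math
import random

def transmitted_fraction_mc(n_photons, mu, thickness):
    """Follow each photon individually: sample its first-interaction depth from
    the exponential free-path distribution and count it as transmitted if that
    depth lies beyond the slab."""
    transmitted = 0
    for _ in range(n_photons):
        free_path = -math.log(1.0 - random.random()) / mu  # sampled distance to first interaction
        if free_path > thickness:
            transmitted += 1
    return transmitted / n_photons

mu, d = 0.02, 100.0  # assumed linear attenuation coefficient (1/mm) and slab thickness (mm)
print("Monte Carlo estimate:", transmitted_fraction_mc(1_000_000, mu, d))
print("Analytic exp(-mu*d): ", math.exp(-mu * d))
```

With many histories the estimate converges to exp(-mu*d), which is exactly why full Monte Carlo is accurate but slow compared with the grouped-transport methods mentioned above.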
17. Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se
SciTech Connect
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Comparisons differ by approximately 2% between experimental measurements and simulations. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
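For readers unfamiliar with the Swank factor mentioned above, it is conventionally obtained from the zeroth, first, and second moments of the pulse-height spectrum as $I = M_1^2/(M_0 M_2)$. A minimal sketch of that calculation from a histogrammed PHS is shown below; the spectrum used here is fabricated, not an ARTEMIS output.

```python
import numpy as np

def swank_factor(pulse_height, counts):
    """Swank factor I = M1**2 / (M0 * M2), where M_n is the n-th moment
    of the pulse-height spectrum."""
    m0 = np.sum(counts)
    m1 = np.sum(counts * pulse_height)
    m2 = np.sum(counts * pulse_height ** 2)
    return m1 ** 2 / (m0 * m2)

# Fabricated spectrum: a Gaussian photopeak on top of a flat low-energy tail.
ph = np.linspace(1.0, 100.0, 200)
counts = np.exp(-0.5 * ((ph - 80.0) / 5.0) ** 2) + 0.02
print("Swank factor: %.3f" % swank_factor(ph, counts))
```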
18. Effect of burst and recombination models for Monte Carlo transport of interacting carriers in a-Se x-ray detectors on Swank noise
SciTech Connect
Fang, Yuan; Karim, Karim S.; Badano, Aldo
2014-01-15
Purpose: The authors describe the modification to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithms are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of the electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.
20. Thermal photon, dilepton production, and electric charge transport in a baryon rich strongly coupled QGP from holography
Finazzo, Stefano Ivo; Rougemont, Romulo
2016-02-01
We obtain the thermal photon and dilepton production rates in a strongly coupled quark-gluon plasma (QGP) at both zero and nonzero baryon chemical potentials using a bottom-up Einstein-Maxwell-dilaton holographic model that is in good quantitative agreement with the thermodynamics of (2+1)-flavor lattice QCD around the crossover transition for baryon chemical potentials up to 400 MeV, which may be reached in the beam energy scan at RHIC. We find that increasing the temperature T and the baryon chemical potential μB enhances the peak present in both spectra. We also obtain the electric charge susceptibility, the dc and ac electric conductivities, and the electric charge diffusion as functions of T and μB. We find that electric diffusive transport is suppressed as one increases μB. At zero baryon density, we compare our results for the dc electric conductivity and the electric charge diffusion with the latest lattice data available for these observables and find reasonable agreement around the crossover transition. Therefore, our holographic results may be used to constrain the magnitude of the thermal photon and dilepton production rates in a strongly coupled QGP, which we found to be at least 1 order of magnitude below perturbative estimates.
1. Quantum Dot Optical Frequency Comb Laser with Mode-Selection Technique for 1-μm Waveband Photonic Transport System
Yamamoto, Naokatsu; Akahane, Kouichi; Kawanishi, Tetsuya; Katouf, Redouane; Sotobayashi, Hideyuki
2010-04-01
An optical frequency comb was generated from a single quantum dot laser diode (QD-LD) in the 1-μm waveband using an Sb-irradiated InGaAs/GaAs QD active medium. A single-mode-selection technique and interference injection-seeding technique are proposed for selecting the optical mode of a QD optical frequency comb laser (QD-CML). In the 1-μm waveband, a wavelength-tunable single-mode light source and a multiple-wavelength generator of a comb with 100-GHz spacing and ultrafine teeth are successfully demonstrated by applying the optical-mode-selection techniques to the QD-CML. Additionally, by applying the single-mode-selection technique to the QD-CML, a 10-Gbps clear eye opening for multiple-wavelengths in 1-μm waveband photonic transport over a 1.5-km-long holey fiber is obtained.
3. Monte Carlo Simulations of a Human Phantom Radio-Pharmacokinetic Response on a Small Field of View Scintigraphic Device
Burgio, N.; Ciavola, C.; Santagata, A.; Iurlaro, G.; Montani, L.; Scaf, R.
2006-04-01
The limiting factors for the scintigraphic clinical application are related to i) biosource characteristics (pharmacokinetics of the drug distribution between organs), ii) the detection chain (photon transport, scintillation, analog-to-digital signal conversion, etc.), and iii) imaging (signal-to-noise ratio, spatial and energy resolution, linearity, etc.). In this work, by using Monte Carlo time-resolved transport simulations on a mathematical phantom and on a small-field-of-view scintigraphic device, the trade-off between the aforementioned factors was preliminarily investigated.
4. Two-Dimensional Radiation Transport in Cylindrical Geometry: Ray-Tracing Compared to Monte Carlo Solutions for a Two-Level Atom
Apruzese, J. P.; Giuliani, J. L.
2008-11-01
Radiation plays a critical role in the dynamics of Z-pinch implosions. Modeling of Z-pinch experiments therefore needs to include an accurate but efficient algorithm for photon transport. Such algorithms exist for the one-dimensional (1D) approximation. In the present work, we report progress toward this goal in a 2D (r,z) geometry, intended for use in radiation hydrodynamics calculations of dynamically evolving Z pinches. We have tested a radiation transport algorithm that uses discrete ordinate sets for the ray in 3-space, and the multifrequency integral solution along each ray. The published solutions of Avery et al. [1] for the line source functions are used as a benchmark to ensure the accuracy of our approach. We discuss the coupling between the radiation field and kinetics that results in large departures from LTE, ruling out use of the diffusion approximation. [1] L. W. Avery, L. L. House, and A. Skumanich, JQSRT 9, 519 (1969).
5. Weak second-order splitting schemes for Lagrangian Monte Carlo particle methods for the composition PDF/FDF transport equations
SciTech Connect
Wang, Haifeng; Popov, Pavel P.; Pope, Stephen B.
2010-03-01
We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
6. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files
SciTech Connect
Randolph Schwarz; Leland L. Carter; Alysia Schwarz
2005-08-23
Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.
7. Fast Monte Carlo for radiation therapy: the PEREGRINE Project
SciTech Connect
Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.
1997-11-11
The purpose of the PEREGRINE program is to bring high-speed, high- accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three- dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
8. Monte Carlo simulation of light transport in turbid medium with embedded object--spherical, cylindrical, ellipsoidal, or cuboidal objects embedded within multilayered tissues.
PubMed
Periyasamy, Vijitha; Pramanik, Manojit
2014-04-01
Monte Carlo modeling of light transport in multilayered tissue (MCML) is modified to incorporate objects of various shapes (sphere, ellipsoid, cylinder, or cuboid) with a refractive-index mismatched boundary. These geometries would be useful for modeling lymph nodes, tumors, blood vessels, capillaries, bones, the head, and other body parts. Mesh-based Monte Carlo (MMC) has also been used to compare the results from the MCML with embedded objects (MCML-EO). Our simulation assumes a realistic tissue model and can also handle the transmission/reflection at the object-tissue boundary due to the mismatch of the refractive index. Simulation of MCML-EO takes a few seconds, whereas MMC takes nearly an hour for the same geometry and optical properties. Contour plots of fluence distribution from MCML-EO and MMC correlate well. This study assists one to decide on the tool to use for modeling light propagation in biological tissue with objects of regular shapes embedded in it. For irregular inhomogeneity in the model (tissue), MMC has to be used. If the embedded objects (inhomogeneity) are of regular geometry (shapes), then MCML-EO is a better option, as simulations like Raman scattering, fluorescent imaging, and optical coherence tomography are currently possible only with MCML. PMID:24727908
9. Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.
PubMed
Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L
2003-02-01
10. Vesicle Photonics
SciTech Connect
Vasdekis, Andreas E.; Scott, E. A.; Roke, Sylvie; Hubbell, J. A.; Psaltis, D.
2013-04-03
Thin membranes, under appropriate boundary conditions, can self-assemble into vesicles, nanoscale bubbles that encapsulate and hence protect or transport molecular payloads. In this paper, we review the types and applications of light fields interacting with vesicles. By encapsulating light-emitting molecules (e.g. dyes, fluorescent proteins, or quantum dots), vesicles can act as particles and imaging agents. Vesicle imaging can also take place via second harmonic generation from the vesicle membrane, as well as by employing mass spectrometry. Light fields can also be employed to transport vesicles using optical tweezers (photon momentum) or to directly perturb the stability of vesicles and hence trigger the delivery of the encapsulated payload (photon energy).
11. Design and fabrication of hollow-core photonic crystal fibers for high-power ultrashort pulse transportation and pulse compression.
PubMed
Wang, Y Y; Peng, Xiang; Alharbi, M; Dutin, C Fourcade; Bradley, T D; Gérôme, F; Mielke, Michael; Booth, Timothy; Benabid, F
2012-08-01
We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large core fiber exhibits an up-to-date lowest attenuation (among all kagome fibers) of 40 dB/km over a broadband transmission centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, and 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little propagation pulse distortion. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air. PMID:22859102
12. Dopamine Transporter Single-Photon Emission Computerized Tomography Supports Diagnosis of Akinetic Crisis of Parkinsonism and of Neuroleptic Malignant Syndrome
PubMed Central
Martino, G.; Capasso, M.; Nasuti, M.; Bonanni, L.; Onofrj, M.; Thomas, A.
2015-01-01
Abstract Akinetic crisis (AC) is akin to neuroleptic malignant syndrome (NMS) and is the most severe and possibly lethal complication of parkinsonism. Diagnosis is today based only on clinical assessments, yet is often marred by concomitant precipitating factors. Our purpose is to show that AC and NMS can be reliably evidenced by FP/CIT single-photon emission computerized tomography (SPECT) performed during the crisis. Prospective cohort evaluation in 6 patients. In 5 patients, affected by Parkinson disease or Lewy body dementia, the crisis was categorized as AC. One was diagnosed as having NMS because of exposure to risperidone. In all patients, FP/CIT SPECT was performed in the acute phase. SPECT was repeated 3 to 6 months after the acute event in 5 patients. Visual assessments and semiquantitative evaluations of binding potentials (BPs) were used. To exclude the interference of emergency treatments, FP/CIT BP was also evaluated in 4 patients currently treated with apomorphine. During AC or NMS, BP values in caudate and putamen were reduced by 95% to 80%, to noise level, with a nearly complete loss of striatum dopamine transporter binding, corresponding to the burst striatum pattern. The follow-up re-evaluation in surviving patients showed a recovery of values to the range expected for Parkinsonisms of the same disease duration. No binding effects of apomorphine were observed. By showing the outstanding binding reduction, a presynaptic dopamine transporter ligand can provide instrumental evidence of AC in Parkinsonism and NMS. PMID:25837755
13. Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code
SciTech Connect
1982-07-01
DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.
14. Ensemble Monte Carlo analysis of subpicosecond transient electron transport in cubic and hexagonal silicon carbide for high power SiC-MESFET devices
Belhadji, Youcef; Bouazza, Benyounes; Moulahcene, Fateh; Massoum, Nordine
2015-05-01
In a comparative framework, an ensemble Monte Carlo method was used to study the electron transport characteristics of two different silicon carbide (SiC) polytypes, 3C-SiC and 4H-SiC. The simulation was performed using a three-valley band structure model in which the valleys are spherical and nonparabolic. The aim of this work is to follow the trajectories of 20,000 electrons under high fields (from 50 kV to 600 kV) and high temperatures (from 200 K to 700 K). We note that this model has already been used in other studies of many zincblende or wurtzite semiconductors. The obtained results, compared with results found in many previous studies, show a notable drift velocity overshoot. This overshoot appears in the subpicosecond transient regime and is directly linked to the applied electric field and lattice temperature.
15. Retinoblastoma external beam photon irradiation with a special ‘D’-shaped collimator: a comparison between measurements, Monte Carlo simulation and a treatment planning system calculation
Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.
2012-11-01
Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm2. Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the PENELOPE code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system using the gamma test. PENELOPE simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations. Monte Carlo simulation is required for optimizing the treatment technique and the dedicated collimator.
16. Theoretical and experimental investigations of asymmetric light transport in graded index photonic crystal waveguides
SciTech Connect
Giden, I. H.; Yilmaz, D.; Turduev, M.; Kurt, H.; Çolak, E.; Ozbay, E.
2014-01-20
To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on the spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: A strong asymmetric propagation characteristic is observed within the frequency interval of 12.8 GHz–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.
18. PENEPMA: a Monte Carlo programme for the simulation of X-ray emission in EPMA
Llovet, X.; Salvat, F.
2016-02-01
The Monte Carlo programme PENEPMA performs simulations of X-ray emission from samples bombarded with electron beams. It is both based on the general-purpose Monte Carlo simulation package PENELOPE, an elaborate system for the simulation of coupled electron-photon transport in arbitrary materials, and on the geometry subroutine package PENGEOM, which tracks particles through complex material structures defined by quadric surfaces. In this work, we give a brief overview of the capabilities of the latest version of PENEPMA along with several examples of its application to the modelling of electron probe microanalysis measurements.
19. MCMini: Monte Carlo on GPGPU
SciTech Connect
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
20. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8135507702827454, "perplexity": 2329.0953563981843}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117244.29/warc/CC-MAIN-20160428161517-00012-ip-10-239-7-51.ec2.internal.warc.gz"}
|
http://iemsjl.org/journal/article.php?code=58080
|
ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.16 No.4 pp.590-597
DOI : https://doi.org/10.7232/iems.2017.16.4.590
# Optimal Maintenance Model Based on Maintenance-Free Operating Period for Multi-Component Weapon Systems
Seongmin Moon*, Jinho Lee
Dongjak-gu, Seoul, Republic of Korea
Dept. of Management Science, Korea Naval Academy, Changwon, Republic of Korea
Corresponding Author, [email protected]
## ABSTRACT
Breakdown outages and maintenance intervals are critical factors in determining lifetime operating costs for a weapon system. The maintenance-free operating period (MFOP) traces the reliability of a weapon system throughout the life of the system. MFOP might be appropriate for a weapon system such as a missile launcher, given the requirement of a high level of reliability for the system. This paper proposes a two-step maintenance planning model based on MFOP for estimating the cost of breakdown outages. The first step is optimising lifetime operating costs, including breakdown and preventive maintenance costs. The second step is grouping the maintenance activities in order to save set-up costs. A numerical experiment for a missile launcher using data obtained from the South Korean navy identified that the optimal intervals from MFOP tended to be shorter than those from traditional system reliability, and that the grouping policy reduced the lifetime operating costs.
## 1.INTRODUCTION
Owing to the global pressure on defence budgets, military forces need to sustain operational availability at a required level and have to reform maintenance policies (Ahmadi et al., 2009). Under this circumstance, one of the most significant issues to be considered when deploying a weapon system might be the optimization of lifetime operating costs by determining suitable maintenance intervals. The majority of the defence standards, such as MIL-HDBK-217F, MIL-STD-1388, and MIL-STD-2173, utilize mean time between failures (MTBF) for planning maintenance. In spite of its theoretical ease of use, MTBF cannot be used to model age-related failure mechanisms owing to the memoryless property of the exponential distribution, which does not involve a time variable. Maintenance models based on MTBF accept that failure cannot be avoided and lead to corrective maintenance (Long et al., 2009). In military establishments the penalty costs caused by corrective maintenance can greatly outweigh the preventive maintenance costs (Moon et al., 2012). For example, a long preventive maintenance interval chosen to save $100 in preventive maintenance cost could cause corrective maintenance that leaves a $100 million warship non-operational. This could result in a military defeat causing casualties and deaths.
A maintenance-free operating period (MFOP) can be defined as “a period of operation during which an item will be able to carry out all its assigned missions, without the operator being restricted in any way due to system faults or limitations, with the minimum of maintenance” (Kumar et al., 1999, p. 128). MFOP survivability (MFOPS) refers to the probability that the item will survive for the duration of MFOP, as shown in Equation (1) (Kumar et al., 1999).
$MFOPS_i = \frac{R_i(I_i + t)}{R_i(I_i)}$
(1)
where $MFOPS_i$ is the probability that component i will survive when it is up at time $I_i + t$, given that it survives at $I_i$; $R_i$ is the reliability of component i; and $I_i$ is the maintenance interval for component i.
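As a quick numerical illustration of Equation (1) for a Weibull-distributed failure time, where $R_i(t) = \exp[-(t/\eta_i)^{\beta_i}]$, the sketch below uses the parameter values quoted in the Figure 1 caption ($I_i$ = 500, β = 2, η = 2000); the comparison with the unconditional reliability hints at why MFOPS behaves differently from traditional reliability later in the paper.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp[-(t/eta)^beta] for a Weibull time-to-failure distribution."""
    return math.exp(-((t / eta) ** beta))

def mfops(interval, t, beta, eta):
    """Equation (1): MFOPS_i = R_i(I_i + t) / R_i(I_i)."""
    return weibull_reliability(interval + t, beta, eta) / weibull_reliability(interval, beta, eta)

beta, eta, interval = 2.0, 2000.0, 500.0                 # values from the Figure 1 caption
print(mfops(interval, interval, beta, eta))              # survive one further interval: ~0.83
print(weibull_reliability(2 * interval, beta, eta))      # unconditional reliability at 2*I_i: ~0.78
```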
MFOP can be employed to analyze age-related failure mechanisms and can help improve the reliability and design of the system (Kumar et al., 1999). This decreases the uncertainty present in maintenance planning and allows logistics managers to focus on selected systems (Cini and Griffith, 1999; Wu et al., 2004). MFOP has been used in many projects, such as future offensive aircraft systems, the joint strike fighter, ultra-reliable aircraft, and BAe Airbus aircraft (Cini and Griffith, 1999; Kumar, 1999; Relf, 1999). Several authors (Kumar et al., 1999; Long et al., 2009) have presented maintenance models based on MFOPS. MFOPS is a useful concept for a high-tech weapon system, because it allows logistics managers to trace its failure mechanism and guarantees its survivability, expressed as a specified probability, during military operations. In spite of the usefulness of MFOPS, however, far too little attention has been paid to the development of a practical optimization model determining lifetime operating costs and maintenance intervals for a multi-component system based on MFOPS. The objective of this paper is to present a two-step maintenance planning model for multi-component weapon systems based on MFOPS.
The remainder of this paper is organized as follows. Section 2 reviews the theoretical framework for MFOP and optimization models for lifetime operating costs. Section 3 develops the maintenance planning model. This is followed in Section 4 by the results and analysis using data obtained from the South Korean navy, and insights for practitioners. Finally, Section 5 presents the concluding remarks.
## 2.LITERATURE REVIEW
This section summarizes the research that has investigated MFOP and maintenance optimization models. MFOP is a reliability measure used by the Ministry of Defence, UK (Kumar, 1999). MFOP requires the manufacturers to analyze the failure mechanism and to improve the reliability and the design of the system. This is because MFOP can trace the failures of the system throughout its lifetime (Kumar et al., 1999). The main idea of MFOP is the elimination of un-planned maintenance for the period of operations (Relf, 1999).
A considerable body of research has proposed a variety of maintenance optimization models. Table 1 compares sixteen major studies that have examined maintenance optimization models. The Weibull failure rate function was the most commonly used failure mechanism as six papers adopted the Weibull failure rate.
Most of the researchers cited in Table 1 provided models minimizing the total costs occurring throughout the lifetime of the systems. Ten studies considered breakdown costs. The majority of the literature employed the preventive maintenance interval (or time) as a decision variable. Eight studies looked at multi-component systems. Some researchers (Horenbeek and Pintelon, 2013) considered grouping maintenance activities.
Previous research (Kumar et al., 1999; Long et al., 2009) has also suggested the concept of MFOPS, which might be suitable for high-tech weapon systems in view of their requirement of high reliability. Kumar et al. (1999) proposed a mathematical model to predict MFOPS for a multi-component system. Recently, the concept of MFOPS has been developed by Long et al. (2009) as a maintenance cost calibration model for a single-unit system. However, to the best of our knowledge, no attempt has been made to optimize maintenance intervals and lifetime operating costs based on MFOPS for a multi-component system. The present work is designed to be the first to develop a maintenance optimization model based on MFOPS for a multi-component system. This paper also provides some empirical evidence to support the performance of the maintenance optimization model.
## 3.THE DEVELOPMENT OF A TWO-STEP MAINTENANCE PLANNING MODEL
This section describes the two-step maintenance planning model for high-tech weapon systems such as a missile launcher in a warship. We assumed that the failures of the components in the missile launcher are independent of each other, and that multiple failures can occur between maintenance activities. The missile launcher is required to launch missiles at any time before its major overhaul made in the Naval repair shop. In order to be kept in operational condition before the major overhauls, the missile launcher needs to be maintained periodically. Adapted from the model of Tam et al. (2006), the first step, minimizing the lifetime operating costs, can be formulated as an integer program, as shown in Equation (2).
Index and Set
• i ∈ I : set of items i (1 ≤ i ≤ n, integer).
Decision Variable
• Ii : maintenance interval for component i (1 ≤ Ii ≤ 1,095, integer).
Parameters
• L: the lifetime period of the system before a major overhaul;
• CD: downtime cost per unit period (i.e. day);
• MFOPSs : the mean MFOPS of the system over the lifetime period before a major overhaul;
• CM,i : the unit preventive maintenance cost for component i.
$\text{minimize} \; L\left[C_D\cdot\left(1-\overline{MFOPS}_s\right)+\sum_{i=1}^{n}\frac{C_{M,i}}{I_i}\right]$
(2)
According to the definition of MFOP presented in Section 1, we did not consider corrective maintenance. The preventive maintenance indicates the replacement of a line replaceable unit (LRU) made on board. We assumed that an LRU becomes as good as new after the replacement, and that the weapon system becomes as good as new after a major overhaul. The replacement or repair of shop replaceable units (SRU) made in the Naval repair shop was beyond the scope of this study. A field survey identified that the replacement times of LRUs for the missile launcher were less than 30 minutes, which is a short time period compared to the operating period (e.g. 14 days). Hence, we assumed that the replacement time does not affect the survivability of the weapon system. $MFOPS_i$ is a function of $I_i$ and t, as shown in Equation (1). In order to describe the maintenance model with periodic preventive maintenance, we here define $MFOPS_i$ as the probability that component i will survive when it is up at time $I_i + I_i$, given that it survives at $I_i$.
MFOPS decreases over the course of a maintenance interval and was assumed to be restored to as good as new after the replacement, as shown in Figure 1. In order to compute MFOPS over the lifetime period, we employed the mean MFOPS.
Under the assumption of a multi-component system in series with the Weibull failure rate function, such that $MFOPS_i = \exp\left[\left(I_i^{\beta_i} - (I_i + I_i)^{\beta_i}\right)/\eta_i^{\beta_i}\right]$, where we considered the MFOPS of each component i over the very next maintenance interval from the current one, $MFOPS_s$ can be expressed as the product of all $MFOPS_i$ and, furthermore, can be approximated as shown in Equation (3). The total downtime is the part of the lifetime during which the system will not be able to carry out its assigned missions before a major overhaul. The total downtime can be estimated as $L\cdot(1-\overline{MFOPS}_s)$.
$\overline{MFOPS}_s = \prod_{i=1}^{n}\overline{MFOPS}_i = \prod_{i=1}^{n}\frac{1}{I_i}\int_{0}^{I_i}\exp\left[\frac{t^{\beta_i}-(2t)^{\beta_i}}{\eta_i^{\beta_i}}\right]dt \approx \prod_{i=1}^{n}\frac{1}{I_i}\left\{\sum_{k=1}^{I_i}\exp\left[\frac{k^{\beta_i}-(2k)^{\beta_i}}{\eta_i^{\beta_i}}\right]\right\} = \prod_{i=1}^{n}\frac{1}{I_i}\left\{\sum_{k=1}^{I_i}\exp\left[\frac{1-2^{\beta_i}}{\eta_i^{\beta_i}}\,k^{\beta_i}\right]\right\}$
(3)
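A minimal sketch of the per-component factor in Equation (3), again assuming the Weibull parameters from the Figure 1 caption (β = 2, η = 2000), shows how the mean MFOPS falls as the maintenance interval grows, which is the source of the cost trade-off discussed next.

```python
import math

def mean_mfops(interval, beta, eta):
    """Discrete-sum approximation of the mean MFOPS of one component over its
    maintenance interval, as in Equation (3)."""
    gamma = (1.0 - 2.0 ** beta) / eta ** beta
    return sum(math.exp(gamma * k ** beta) for k in range(1, interval + 1)) / interval

for interval in (100, 500, 1000):   # candidate maintenance intervals in days
    print(interval, round(mean_mfops(interval, 2.0, 2000.0), 4))
```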
If we let $f_i(I_i) = \left(\sum_{k=1}^{I_i}\exp\left[\gamma_i k^{\beta_i}\right]\right)/I_i$, where $\gamma_i = (1-2^{\beta_i})/\eta_i^{\beta_i}$ is a constant, Equation (2) is (approximately) equivalent to minimizing $-C_D\prod_{i=1}^{n} f_i(I_i) + \sum_{i=1}^{n} C_{M,i}/I_i$. Note that $\gamma_i \le 0$ for $\beta_i \ge 0$, and thus $f_i$ is monotonically decreasing in $I_i$, since $f_i(k_1) \ge f_i(k_2)$ for any integers $0 \le k_1 \le k_2$. Because of this, the first term, $-C_D\prod_{i=1}^{n} f_i(I_i)$, increases, whereas the second term, $\sum_{i=1}^{n} C_{M,i}/I_i$, decreases as $I_i$ increases. Therefore, the two terms of the objective function conflict with each other, and we may take an optimal interval at some point which is suboptimal for each term individually but optimal for their combination. As stated in Section 1, the breakdown outages cost can greatly outweigh the preventive maintenance cost. This implies that $C_D \gg C_{M,i}$ for all i, and biases the optimal interval towards the consideration of the breakdown outages cost. Before dealing with a multi-component system, we first discuss an optimal interval for a single component. When considering only a single component, the objective becomes:
$Z_i = \text{minimize} \; \frac{C_{M,i}}{I_i} - C_D f_i(I_i)$
(4)
The solution strategy for this objective is identical to the one that Tam et al. (2006) used: to plot the two values of preventive maintenance cost and breakdown outages cost independently on a grid and take the sum of these two values to obtain an optimal interval that gives the minimum value of the objective. For each component i, taking its optimal interval, $I_i^*$, that achieves objective (4) might not ensure optimality for objective (2) under a multi-component system. A heuristic approach, similar to the marginal analysis used by Sherbrooke (2004), can be applied to this problem, as shown in objective (5). In order to find the marginal or incremental value for each component, the lifetime operating cost of the item is divided by its unit preventive maintenance cost. However, as a practical approach, we solved objective (4) for each single component and used that solution for the multi-component model. The interdependency of the maintenance intervals for the components in the system was considered in the second step.
$Z = \text{minimize} \sum_{i=1}^{n}\frac{1}{C_{M,i}}\cdot\left[\frac{C_{M,i}}{I_i} - C_D f_i(I_i)\right]$
(5)
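A direct way to reproduce the first-step search described above (solving objective (4) over a grid of candidate integer intervals) is sketched below. The cost figures echo the example given later in Section 4, but the Weibull parameters are assumed for illustration and are not the Table 2 values.

```python
import math

def f(interval, beta, eta):
    """f_i(I_i) = (sum_{k=1}^{I_i} exp[gamma_i * k^beta_i]) / I_i, the mean MFOPS."""
    gamma = (1.0 - 2.0 ** beta) / eta ** beta
    return sum(math.exp(gamma * k ** beta) for k in range(1, interval + 1)) / interval

def optimal_interval(c_m, c_d, beta, eta, max_interval=1095):
    """Grid search for objective (4): minimise C_M/I - C_D * f(I) over integer I."""
    best_i, best_z = None, float("inf")
    for interval in range(1, max_interval + 1):
        z = c_m / interval - c_d * f(interval, beta, eta)
        if z < best_z:
            best_i, best_z = interval, z
    return best_i, best_z

# Illustrative single component: C_M = $9,598 and C_D = $36,951 (c = 5), with
# assumed Weibull parameters beta = 1.1 and eta = 900 days.
print(optimal_interval(9598.0, 36951.0, 1.1, 900.0))
```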
The second step, adopting the formula of Horenbeek and Pintelon (2013), is to group maintenance activities to save set-up costs. The lifetime operating costs, taking into account preventive maintenance costs including the component-dependent maintenance cost and the system-dependent maintenance cost (i.e. set-up cost), can be written as shown in Equation (6):
$L\left[C_D\cdot\left(1-\prod_{i=1}^{n} f_i(\hat{I}_i)\right)+\sum_{i=1}^{n} C_{M,i}/\hat{I}_i\right]+\sum_{j=1}^{m} a_j S_j$
(6)
where
• $\hat{I}_i$ : the modified interval obtained by grouping maintenance activities;
• aj : the number of maintenance activities for group j during the lifetime period, L, before a major overhaul;
• Sj : the set-up cost for group j.
Without grouping components, the lifetime operating costs can be described as shown in Equation (7). Since each component i was included in only one set of Gj, SCi was well-defined. The costs according to Equation (6) were hypothesised to be lower than the costs according to Equation (7), owing to the suboptimal interval for each component achieved without grouping maintenance activities. The next section provides the results from some computational experiments based on a multi-component weapon system of the Korean navy.
$L\left[C_D\cdot\left(1-\prod_{i=1}^{n} f_i(I_i^*)\right)+\sum_{i=1}^{n}\left(C_{M,i}+SC_i\right)/I_i^*\right]$
(7)
where $SC_i = S_j$ for $i \in G_j$ (the set of components in group j).
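To make the comparison between Equations (6) and (7) concrete, the sketch below evaluates both costs for a small hypothetical three-component system; all numbers (costs, Weibull parameters, first-step intervals, set-up costs) are illustrative and are not the navy data analysed in Section 4.

```python
import math

L = 1095            # lifetime before a major overhaul (days)
C_D = 36951.0       # breakdown outages cost per day (illustrative)
SETUP = 10000.0     # set-up cost per maintenance activity (illustrative)

def f(interval, beta, eta):
    gamma = (1.0 - 2.0 ** beta) / eta ** beta
    return sum(math.exp(gamma * k ** beta) for k in range(1, interval + 1)) / interval

# Hypothetical components: preventive cost, Weibull parameters, first-step interval I*.
components = [
    {"c_m": 9598.0,  "beta": 1.1, "eta": 900.0,  "i_opt": 16},
    {"c_m": 154.0,   "beta": 1.0, "eta": 1200.0, "i_opt": 14},
    {"c_m": 21232.0, "beta": 1.2, "eta": 800.0,  "i_opt": 29},
]

def cost_without_grouping(comps):
    """Equation (7): every component keeps I* and pays its own set-up cost."""
    mfops_s = math.prod(f(c["i_opt"], c["beta"], c["eta"]) for c in comps)
    return L * (C_D * (1.0 - mfops_s)
                + sum((c["c_m"] + SETUP) / c["i_opt"] for c in comps))

def cost_with_grouping(comps, group_intervals=(15, 30)):
    """Equation (6): intervals are snapped to shared group intervals, so one
    set-up cost is paid per group activity (a_j activities of group j over L)."""
    for c in comps:
        c["i_hat"] = min(group_intervals, key=lambda g: abs(g - c["i_opt"]))
    mfops_s = math.prod(f(c["i_hat"], c["beta"], c["eta"]) for c in comps)
    pm_cost = sum(c["c_m"] / c["i_hat"] for c in comps)
    setup_cost = sum((L // g) * SETUP for g in group_intervals
                     if any(c["i_hat"] == g for c in comps))
    return L * (C_D * (1.0 - mfops_s) + pm_cost) + setup_cost

print("without grouping:", round(cost_without_grouping(components)))
print("with grouping:   ", round(cost_with_grouping(components)))
```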
## 4.REAL-LIFE EXAMPLE AND ANALYSIS
This section presents the failure rates fitted to the failure data of the components for a weapon system, discusses some computational studies using the fitting results, and provides insights for practitioners. The failure history for the seventeen components for the missile launcher of a specific type of warship in the Korean navy before a major overhaul (1,095 days), used by Moon and Lee (2017), was analyzed. The Weibull failure rate function, $\lambda(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1},\ t \ge 0$, was assumed to fit the failure data for this research. Table 2 presents the results of fitting the Weibull to the failure records of the components and their material costs. The values of β as shown in Table 2 indicate that 1,095 days might span a useful-life or early wear-out stage for the components with slightly increasing failure rates (Abernethy, 2001; Rausand and Hoyland, 2004).
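The Weibull fitting step described above can be reproduced, for example, by maximum-likelihood estimation; a minimal sketch using scipy with fabricated times-to-failure (the real records behind Table 2 are not reproduced here) is:

```python
import numpy as np
from scipy.stats import weibull_min

# Fabricated times-to-failure (days) for a single component.
ttf = np.array([210.0, 340.0, 95.0, 620.0, 480.0, 150.0, 730.0, 55.0])

# Two-parameter Weibull fit with the location fixed at zero: shape = beta, scale = eta.
beta, loc, eta = weibull_min.fit(ttf, floc=0)
print("beta = %.2f, eta = %.0f days" % (beta, eta))
# A beta close to 1 indicates a nearly constant failure rate (little ageing),
# which is the limitation noted in the concluding section.
```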
Maintenance costs are composed of various cost elements, for example material costs plus maintenance man-hours multiplied by the labour rate (Wu et al., 2004). However, for simplicity, this paper considered the material cost to be identical to the preventive maintenance cost, $C_{M,i}$. This was because a field survey identified that material cost constituted the largest element of maintenance costs. The material costs are provided in Table 2. A breakdown outages cost was calculated as $C_D = c\left(\frac{1}{n}\sum_{i=1}^{n} C_{M,i}\right)$, where c is a constant of our choice, in order to set the cost of breakdown outages relative to the mean preventive maintenance cost, $\sum_{i=1}^{n} C_{M,i}/n$.
As the first step, for each component i, we examined an optimal interval that provided a minimum value of the lifetime operating costs by plotting those two values on the two-dimensional grid. For instance, the optimal interval of Component 1 ($I_1^*$) with CM,1 = $9,598 for the case where c = 5 (i.e. CD = $36,951) was calculated as 16 days. Similarly, optimal intervals for all components were obtained as shown in Table 3. As mentioned in Section 3, for a fixed value of CD, as the interval grew larger, the preventive maintenance cost decreased, whereas the breakdown outages cost increased.
A high breakdown outages cost reduced the interval, because more frequent preventive maintenance made a component stay in the "up" state and reduced the chance of a component being in the "down" state, which was relatively costly owing to the breakdown outages cost. We also compared the model with MFOPS to one with traditional system reliability (Tam et al., 2006), as shown in Table 3. Traditional system reliability, R(t), can be defined as $R(t) = P[T > t] = \exp\left[-(t/\eta)^{\beta}\right]$ based on the Weibull function. In general, optimal intervals obtained via MFOPS were shorter than those obtained via traditional system reliability, owing to the conditional probability involved in MFOPS. This pattern was distinct when maintenance costs were very low (e.g. #6, #7, and #15). The shorter optimal intervals from MFOPS might imply a higher probability of survival and a higher requirement for maintenance (resulting in higher lifetime operating costs) compared to those obtained via traditional system reliability (Croker, 1997). For example, for c = 5, the lifetime operating cost using MFOPS ($15.04 million) was higher than that using traditional system reliability ($13.84 million).
We further examined how sensitively an optimal interval decreases as c increases. Taking preventive maintenance costs into consideration, Components 6, 1, and 12 were compared. This was because, while their Weibull parameters showed similar characteristics, their preventive maintenance costs were distinguishable (i.e. CM,6 = $154, CM,1 = $9,598, and CM,12 = $21,232). Figure 2 illustrates the influence of preventive maintenance costs on determining optimal intervals. With a lower CM,i (e.g. Component 6), the corresponding optimal interval decreased slowly. However, a higher preventive maintenance cost (e.g. Component 12) resulted in a relatively rapid decrease. The relative breakdown outages cost had a more sensitive effect on the optimal maintenance interval of a component having a higher preventive maintenance cost than on that of a component having a lower preventive maintenance cost, as shown in Figure 2. Managers might therefore have to concentrate more on components having a high preventive maintenance cost and should be more careful in determining the relative costs of breakdown outages and preventive maintenance for such components.

As the second step, the components were grouped according to the maintenance intervals determined at the first step. Two distinct groups of components, $G_j$, j = 1, 2 (semi-monthly and monthly intervals), were considered. With the breakdown outages cost $C_D = c\left(\frac{1}{n}\sum_{i=1}^{n} C_{M,i}\right)$, where c = 3, the optimal intervals based on MFOPS were grouped in such a way that the optimal interval of each component was assigned to the closer of 15 days and 30 days, that is, $G_1$ = {1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 13, 15, 16, 17} and $G_2$ = {9, 12, 14}. Then, the maintenance interval of each component was modified to fit into one of the two groups, that is, $\hat{I}_i$ = 15 days for all $i \in G_1$, and $\hat{I}_i$ = 30 days for all $i \in G_2$. With the set-up costs set at $S_1 = S_2$ = $10,000, grouping at the second step outperformed the non-grouping intervals, as shown in Table 4. This might be because, although a higher breakdown outages cost was incurred under the grouping policy owing to the lower MFOPS, merging maintenance activities into a group reduced the corresponding maintenance costs by more than the additionally incurred breakdown outages cost.
Grouping maintenance activities can be claimed to reduce the lifetime operating costs before major overhaul as shown in Table 4. However, as the grouping increased the breakdown outages, the grouping policy might not be suitable for a system requiring a lower chance of breakdown outages (i.e. higher MFOPS) than that determined by the grouping. In that case, managers should shorten the grouping maintenance intervals in order to sustain the required MFOPS. The grouping policy might be regarded as being useful when it satisfies the required MFOPS for users.
## 5.CONCLUSIONS
Maintenance-free operating period survivability (MFOPS) can analyze age-related failure mechanisms and can help improve the reliability and design of the system. This paper suggests a practical two-step maintenance planning model based on MFOPS. At the first step, the lifetime operating costs incurred by breakdown and preventive maintenance for a multi-component weapon system assumed to be in series are minimized. This provides an optimal maintenance interval for each component of the system. The second step integrates individual preventive maintenance activities for components of a weapon system into temporal groups in order to reduce set-up costs. Therefore, the research gap, that is, the lack of a practical maintenance optimization model for a multi-component system based on MFOPS, might be claimed to be filled.
This study has found that MFOPS has a tendency to require a shorter optimal preventive maintenance interval than traditional system reliability. This implies a higher probability of survival for the system and a higher requirement for maintenance (resulting in higher lifetime operating costs) compared to system reliability. The second major finding was that the relative breakdown outages cost had a greater effect on the optimal maintenance interval of a component having a higher preventive maintenance cost than on that of a component having a lower preventive maintenance cost. This suggests that managers need to focus more on a component with a high preventive maintenance cost and should pay particular attention to deciding the relative cost of breakdown outages and preventive maintenance for the component. The third important finding was that grouping maintenance activities could reduce the lifetime operating costs. However, the advantage of the grouping policy can be offset by the increased breakdown outages, resulting in a lower MFOPS. This suggests that managers might have to shorten the intervals for grouped maintenance activities in order to sustain the required MFOPS.
This research has several theoretical and practical contributions. Firstly, this is the first time that MFOP, which is suitable for high-tech weapon systems in view of their requirement of high reliability, has been used to optimize maintenance interval and lifetime operating costs for a multi-component system. Secondly, the present study provides empirical evidence with respect to the performance of the maintenance optimization model. Thirdly, as stated above, this research provides managers with some practical guidance for the use of the lifetime operating costs optimization model.
A limitation of this study relates to the fact that the results of fitting the Weibull function to the failure history of the components used for this study seemed not to show an obvious age-related failure mechanism. The β values close to 1 as shown in Table 2 imply an almost constant failure rate. There might not be a significant advantage in adopting MFOP rather than MTBF in this case. Components having values of β which are far from 1 would gain more benefit from adopting MFOP. Further work needs to be done to identify the advantages of MFOP compared to MTBF and this will be done with data presenting an obvious age-related failure mechanism.
## Figure
MFOPS over the lifetime period, where Ii = 500, β = 2, η = 2000 (Long et al., 2009).
Comparisons of optimal intervals determined by preventive maintenance costs.
## Table
A review of maintenance optimization models
F = failure rate function; W = Weibull; In = increasing failure rate; De = decreasing failure rate; Ex = exponential; G = Gamma; N = Normal; B = Beta; Min = minimize; Max = maximize; PM = preventive maintenance; CM = corrective maintenance; SS = system structure; MC = multi-component system; SU = single-unit system.
Failure data fitting results and material costs (Moon and Lee, 2017)
Optimal intervals of each component for various values of C_D = c · ((1/n) ∑_{i=1}^{n} C_{M,i})
Shorter optimal intervals are shown in bold.
Comparison of the costs with and without consideration of the set-up cost
## REFERENCES
1. Abernethy, R. B. (2000) The New Weibull Handbook: Reliability & Statistical Analysis for Predicting Life, Safety, Survivability, Risk, Cost and Warranty Claims, Robert B. Abernethy.
2. Ahmadi, A., Fransson, T., Crona, A., Klein, M., Soderholm, P. (2009) Integration of RCM and PHM for the next generation of aircraft, Proceedings of the 2009 IEEE Aerospace Conference, Big Sky.
3. Beaurepaire, P., Valdebenito, M. A., Schueller, G. I., Jensen, H. A. (2012) Reliability-based optimization of maintenance scheduling of mechanical components under fatigue, Computer Methods in Applied Mechanics and Engineering, Vol. 221-222, pp. 24-40.
4. Bris, R., Byczanski, P. (2013) Effective computing algorithm for maintenance optimization of highly reliable systems, Reliab. Eng. Syst. Saf., Vol. 109, pp. 77-85.
5. Cini, P. F., Griffith, P. (1999) Designing for MFOP: towards the autonomous aircraft, J. Qual. Mainten. Eng., Vol. 5 (4), pp. 296-306.
6. Croker, J. (1997) Maintenance free operating period - is this the way forward?, Proceedings of the 7th International MIRCE Symposium.
7. Horenbeek, A. V., Pintelon, L. (2013) A dynamic predictive maintenance policy for complex multi-component systems, Reliab. Eng. Syst. Saf., Vol. 120, pp. 39-50.
8. Jiang, Y., McCalley, J. D., Voorhis, T. V. (2006) Risk-based resource optimization for transmission system maintenance, IEEE Trans. Power Syst., Vol. 21 (3), pp. 1191-1200.
9. Khatab, A., Aghezzaf, E. H. (2016) Selective maintenance optimization when quality of imperfect maintenance actions are stochastic, Reliab. Eng. Syst. Saf., Vol. 150, pp. 182-189.
10. Kumar, U. D. (1999) New trends in aircraft reliability and maintenance measures, J. Qual. Mainten. Eng., Vol. 5 (4), pp. 287-295.
11. Kumar, U. D., Knezevic, J., Crocker, J. (1999) Maintenance free operating period: an alternative measure to MTBF and failure rate for specifying reliability?, Reliab. Eng. Syst. Saf., Vol. 64 (1), pp. 127-131.
12. Lin, D., Zuo, M., Yam, R., Meng, M. Q. (2000) Optimal system design considering warranty, periodic preventive maintenance, and minimal repair, J. Oper. Res. Soc., Vol. 51 (7), pp. 869-874.
13. Long, J., Shenoi, R. A., Jiang, W. (2009) A reliability centered maintenance strategy based on maintenance-free operating period philosophy and total lifetime operating cost analysis, Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering.
14. Maatouk, I., Chebbo, N., Jarkass, I., Chatelet, E. (2016) Maintenance optimization using combined fuzzy genetic algorithm and local search, IFAC-PapersOnLine, Vol. 49 (12), pp. 757-762.
15. Moon, S., Hicks, C., Simpson, A. (2012) The development of a hierarchical forecasting method for predicting spare parts demand in the South Korean Navy: a case study, Int. J. Prod. Econ., Vol. 140 (2), pp. 794-802.
16. Moon, S., Lee, J. (2017) Optimising concurrent spare parts inventory levels for warships under dynamic conditions, Industrial Engineering & Management Systems, Vol. 16 (1), pp. 52-63.
17. Panagiotidou, S., Tagaras, G. (2007) Optimal preventive maintenance for equipment with two quality states and general failure time distributions, Eur. J. Oper. Res., Vol. 180 (1), pp. 329-353.
18. Pascual, R., Ortega, J. H. (2006) Optimal replace and overhaul decisions with imperfect maintenance and warranty contracts, Reliab. Eng. Syst. Saf., Vol. 91 (2), pp. 241-248.
19. Rausand, M., Hoyland, A. (2004) System Reliability Theory: Models, Statistical Methods, and Applications, John Wiley & Sons.
20. Relf, M. N. (1999) Maintenance-free operating periods - the designer's challenge, Qual. Reliab. Eng. Int., Vol. 15, pp. 111-116.
21. Sherbrooke, C. C. (2004) Optimal Inventory Modeling of Systems: Multi-Echelon Techniques, Kluwer Academic Publishers.
22. Shirmohammadi, A. H., Zhang, Z. G., Love, E. (2007) A computational model for determining the optimal preventive maintenance policy with random breakdowns and imperfect repairs, IEEE Trans. Reliab., Vol. 56 (2), pp. 332-339.
23. Tam, A. S., Chan, W. M., Price, J. W. (2006) Optimal maintenance intervals for a multi-component system, Prod. Plann. Contr., Vol. 17 (8), pp. 769-779.
24. Vassiliadis, C. G., Pistikopoulos, E. N. (2001) Maintenance scheduling and process optimization under uncertainty, Comput. Chem. Eng., Vol. 25 (2-3), pp. 217-236.
25. Wang, W. (2012) A stochastic model for joint spare parts inventory and planned maintenance optimization, Eur. J. Oper. Res., Vol. 216 (1), pp. 127-139.
26. Wu, H., Liu, Y., Ding, Y., Liu, J. (2004) Method to reduce direct maintenance costs for commercial aircraft, Aircr. Eng. Aerosp. Technol., Vol. 76 (1), pp. 15-18.
27. Wu, S., Clements-Croome, D. (2005) Optimal maintenance policies under different operational schedules, IEEE Trans. Reliab., Vol. 54 (2), pp. 338-346.
28. Yeh, R. H., Lo, H. C. (2001) Optimal preventive-maintenance warranty policy for repairable products, Eur. J. Oper. Res., Vol. 134 (1), pp. 59-69.
https://worldwidescience.org/topicpages/s/sandy+ocean+sediments.html
#### Sample records for sandy ocean sediments
1. Geomechanical, Hydraulic and Thermal Characteristics of Deep Oceanic Sandy Sediments Recovered during the Second Ulleung Basin Gas Hydrate Expedition
Directory of Open Access Journals (Sweden)
Yohan Cha
2016-09-01
Full Text Available This study investigates the geomechanical, hydraulic and thermal characteristics of natural sandy sediments collected during the Ulleung Basin gas hydrate expedition 2, East Sea, offshore Korea. The studied sediment formation is considered as a potential target reservoir for natural gas production. The sediments contained silt, clay and sand fractions of 21%, 1.3% and 77.7%, respectively, as well as diatomaceous minerals with internal pores. The peak friction angle and critical state (or residual state) friction angle under drained conditions were ~26° and ~22°, respectively. There was minimal or no apparent cohesion intercept. Stress- and strain-dependent elastic moduli, such as tangential modulus and secant modulus, were identified. The sediment stiffness increased with increasing confining stress, but degraded with increasing strain regime. Variations in water permeability with water saturation were obtained by fitting experimental matric suction-water saturation data to the Mualem-van Genuchten model. A significant reduction in thermal conductivity (from ~1.4–1.6 to ~0.5–0.7 W·m−1·K−1) was observed when water saturation decreased from 100% to ~10%–20%. In addition, the electrical resistance increased quasi-linearly with decreasing water saturation. The geomechanical, hydraulic and thermal properties of the hydrate-free sediments reported herein can be used as the baseline when predicting properties and behavior of the sediments containing hydrates, and when the hydrates dissociate during gas production. The variations in thermal and hydraulic properties with changing water and gas saturation can be used to assess gas production rates from hydrate-bearing deposits. In addition, while depressurization of hydrate-bearing sediments inevitably causes deformation of sediments under drained conditions, the obtained strength and stiffness properties and stress-strain responses of the sedimentary formation under drained loading conditions
2. Ocean Sediment Thickness Contours
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Ocean sediment thickness contours in 200 meter intervals for water depths ranging from 0 - 18,000 meters. These contours were derived from a global sediment...
3. The oceanic sediment barrier
International Nuclear Information System (INIS)
Francis, T.J.G.; Searle, R.C.; Wilson, T.R.S.
1986-01-01
Burial within the sediments of the deep ocean floor is one of the options that have been proposed for the disposal of high-level radioactive waste. An international research programme is in progress to determine whether oceanic sediments have the requisite properties for this purpose. After summarizing the salient features of this programme, the paper focuses on the Great Meteor East study area in the Northeast Atlantic, where most oceanographic effort has been concentrated. The geological geochemical and geotechnical properties of the sediments in the area are discussed. Measurements designed to determine the rate of pore water movement through the sediment column are described. Our understanding of the chemistry of both the solid and pore-water phases of the sediment are outlined, emphasizing the control that redox conditions have on the mobility of, for example, naturally occurring manganese and uranium. The burial of instrumented free-fall penetrators to depths of 30 m beneath the ocean floor is described, modelling one of the methods by which waste might be emplaced. Finally, the nature of this oceanic environment is compared with geological environments on land and attention is drawn to the gaps in our knowledge that must be filled before oceanic burial can be regarded as an acceptable disposal option. (author)
4. Denitrification pathways and rates in the sandy sediments of the Georgia continental shelf, USA
Directory of Open Access Journals (Sweden)
Ingall Ellery
2005-02-01
Full Text Available Denitrification in continental shelf sediments has been estimated to be a significant sink of oceanic fixed nitrogen (N. The significance and mechanisms of denitrification in organic-poor sands, which comprise 70% of continental shelf sediments, are not well known. Core incubations and isotope tracer techniques were employed to determine processes and rates of denitrification in the coarse-grained, sandy sediments of the Georgia continental shelf. In these sediments, heterotrophic denitrification was the dominant process for fixed N removal. Processes such as coupled nitrification-denitrification, anammox (anaerobic ammonium oxidation, and oxygen-limited autotrophic nitrification-denitrification were not evident over the 24 and 48 h time scale of the incubation experiments. Heterotrophic denitrification processes produce 22.8–34.1 μmole N m-2 d-1 of N2 in these coarse-grained sediments. These denitrification rates are approximately two orders of magnitude lower than rates determined in fine-grained shelf sediments. These lower rates may help reconcile unbalanced marine N budgets which calculate global N losses exceeding N inputs.
5. Enhancing the biodegradation of oil in sandy sediments with choline: A naturally methylated nitrogen compound
International Nuclear Information System (INIS)
Mortazavi, Behzad; Horel, Agota; Anders, Jennifer S.; Mirjafari, Arsalan; Beazley, Melanie J.; Sobecky, Patricia A.
2013-01-01
6. Morphosedimentary evolution of carbonate sandy beaches at decadal scale : case study in Reunion Island , Indian Ocean
Science.gov (United States)
Mahabot, Marie-Myriam; Pennober, Gwenaelle; Suanez, Serge; Troadec, Roland; Delacourt, Christophe
2017-04-01
Global change introduce a lot of uncertainties concerning future trajectory of beaches by directly or indirectly modifying major driving factors. An improved understanding of the past shoreline evolution may help for anticipate future coastline response. However, in tropical environment, studies concerning carbonate beaches dynamics are scarce compared to open sandy beaches. Consequently, coral reef protected beaches morphological adjustment is still poorly understood and long-term evolution rate are poorly quantified in these specific environment. In this context, La Reunion Island, insular department of France located in Indian Ocean, constitute a favoured laboratory. This high volcanic island possesses 25 km of carbonate beaches which experience hydrodynamic forcing specific from tropical environment: cyclonic swell during summer and long period swell during winter. Because of degraded coral reef health and high anthropogenic pressure, 50% of the beaches are in erosion since 1970s. Beach survey has been conducted since 1990s by scientist and are now encompassed as pilot site within a French observatory network which guarantee long-term survey with high resolution observational techniques. Thus, La Reunion Island is one of the rare carbonate beach to be surveyed since 20 years. This study aims to examined and quantify beach response at decadal scale on carbonate sandy beaches of Reunion Island. The study focus on 12 km of beaches from Cap Champagne to the Passe de Trois-Bassins. The analyze of 15 beach profile data originated from historical and DGPS beach topographic data confirm long term trend to erosion. Sediment lost varies between 0.5 and 2 m3.yr-1 since 1998. However longshore current have led to accretion of some part of beach compartment with rate of 0.7 to 1.6 m3.yr-1. Wave climate was examined from in-situ measurement over 15 years and show that extreme waves associated with tropical cyclones and long period swell play a major role in beach dynamics
7. Inner-shelf ocean dynamics and seafloor morphologic changes during Hurricane Sandy
Science.gov (United States)
Warner, John C.; Schwab, William C.; List, Jeffrey H.; Safak, Ilgar; Liste, Maria; Baldwin, Wayne
2017-04-01
Hurricane Sandy was one of the most destructive hurricanes in US history, making landfall on the New Jersey coast on October 30, 2012. Storm impacts included several barrier island breaches, massive coastal erosion, and flooding. While changes to the subaerial landscape are relatively easily observed, storm-induced changes to the adjacent shoreface and inner continental shelf are more difficult to evaluate. These regions provide a framework for the coastal zone, are important for navigation, aggregate resources, marine ecosystems, and coastal evolution. Here we provide unprecedented perspective regarding regional inner continental shelf sediment dynamics based on both observations and numerical modeling over time scales associated with these types of large storm events. Oceanographic conditions and seafloor morphologic changes are evaluated using both a coupled atmospheric-ocean-wave-sediment numerical modeling system that covered spatial scales ranging from the entire US east coast (1000 s of km) to local domains (10 s of km). Additionally, the modeled response for the region offshore of Fire Island, NY was compared to observational analysis from a series of geologic surveys from that location. The geologic investigations conducted in 2011 and 2014 revealed lateral movement of sedimentary structures of distances up to 450 m and in water depths up to 30 m, and vertical changes in sediment thickness greater than 1 m in some locations. The modeling investigations utilize a system with grid refinement designed to simulate oceanographic conditions with progressively increasing resolutions for the entire US East Coast (5-km grid), the New York Bight (700-m grid), and offshore of Fire Island, NY (100-m grid), allowing larger scale dynamics to drive smaller scale coastal changes. Model results in the New York Bight identify maximum storm surge of up to 3 m, surface currents on the order of 2 ms-1 along the New Jersey coast, waves up to 8 m in height, and bottom stresses
8. Inner-shelf ocean dynamics and seafloor morphologic changes during Hurricane Sandy
Science.gov (United States)
Warner, John C.; Schwab, William C.; List, Jeffrey; Safak, Ilgar; Liste, Maria; Baldwin, Wayne E.
2017-01-01
Hurricane Sandy was one of the most destructive hurricanes in US history, making landfall on the New Jersey coast on Oct 30, 2012. Storm impacts included several barrier island breaches, massive coastal erosion, and flooding. While changes to the subaerial landscape are relatively easily observed, storm-induced changes to the adjacent shoreface and inner continental shelf are more difficult to evaluate. These regions provide a framework for the coastal zone, are important for navigation, aggregate resources, marine ecosystems, and coastal evolution. Here we provide unprecedented perspective regarding regional inner continental shelf sediment dynamics based on both observations and numerical modeling over time scales associated with these types of large storm events. Oceanographic conditions and seafloor morphologic changes are evaluated using both a coupled atmospheric-ocean-wave-sediment numerical modeling system and observation analysis from a series of geologic surveys and oceanographic instrument deployments focused on a region offshore of Fire Island, NY. The geologic investigations conducted in 2011 and 2014 revealed lateral movement of sedimentary structures of distances up to 450 m and in water depths up to 30 m, and vertical changes in sediment thickness greater than 1 m in some locations. The modeling investigations utilize a system with grid refinement designed to simulate oceanographic conditions with progressively increasing resolutions for the entire US East Coast (5-km grid), the New York Bight (700-m grid), and offshore of Fire Island, NY (100-m grid), allowing larger scale dynamics to drive smaller scale coastal changes. Model results in the New York Bight identify maximum storm surge of up to 3 m, surface currents on the order of 2 ms-1 along the New Jersey coast, waves up to 8 m in height, and bottom stresses exceeding 10 Pa. Flow down the Hudson Shelf Valley is shown to result in convergent sediment transport and deposition along its axis
9. Sediment Chemistry and Toxicity in Barnegat Bay, New Jersey: Pre- and Post- Hurricane Sandy, 2012-2013.
Science.gov (United States)
Romanok, Kristin M.; Szabo, Zoltan; Reilly, Timothy J.; Defne, Zafer; Ganju, Neil K.
2016-01-01
Hurricane Sandy made landfall in Barnegat Bay on October 29, 2012, damaging shorelines and infrastructure. Estuarine sediment chemistry and toxicity were investigated before and after to evaluate potential environmental health impacts and to establish post-event baseline sediment-quality conditions. Trace element concentrations increased throughout Barnegat Bay by up to two orders of magnitude, especially north of Barnegat Inlet, consistent with northward redistribution of silt. Loss of organic compounds, clay, and organic carbon is consistent with sediment winnowing and transport through the inlets and sediment transport modeling results. The number of sites exceeding sediment quality guidance levels for trace elements tripled post-Sandy. Sediment toxicity post-Sandy was mostly unaffected relative to pre-Sandy conditions, but at the site with the greatest relative increase for trace elements, survival rate of the test amphipod decreased (indicating degradation). This study would not have been possible without comprehensive baseline data enabling the evaluation of storm-derived changes in sediment quality.
10. A new method for measuring bioturbation rates in sandy tidal flat sediments based on luminescence dating
DEFF Research Database (Denmark)
Madsen, Anni T.; Murray, Andrew S.; Jain, Mayank
2011-01-01
The rates of post-depositional mixing by bioturbation have been investigated using Optically Stimulated Luminescence (OSL) dating in two sediment cores (BAL2 and BAL5), retrieved from a sandy tidal flat in the Danish part of the Wadden Sea. A high-resolution chronology, consisting of thirty-six OSL...
11. Enhanced benthic activity in sandy sublittoral sediments: Evidence from 13C tracer experiments
NARCIS (Netherlands)
Bühring, S.I.; Ehrenhauss, S.; Kamp, A.; Moodley, L.; Prof. Witte, U.
2006-01-01
In situ and on-board pulse-chase experiments were carried out on a sublittoral fine sand in the German Bight (southern North Sea) to investigate the hypothesis that sandy sediments are highly active and have fast turnover rates. To test this hypothesis, we conducted a series of experiments where we
12. Importance of phytodetritus and microphytobenthos for heterotrophs in a shallow subtidal sandy sediment
NARCIS (Netherlands)
Evrard, V.; Huettel, M.; Cook, P.L.M.; Soetaert, K.; Heip, C.H.R.; Middelburg, J.J.
2012-01-01
The relative importance of allochthonous phytodetritus deposition and autochthonous microphytobenthos (MPB) production for benthic consumers in an organic carbon (C-org)-poor sandy sediment was assessed using a C-13-stable isotope natural abundance study combined with a dual C-13-tracer addition
13. Carbon and nitrogen flows through the benthic food web of a photic subtidal sandy sediment
NARCIS (Netherlands)
Evrard, V.P.E.; Soetaert, K.E.R.; Heip, C.H.R.; Huettel, M.; Xenopoulos, M.A.; Middelburg, J.J.
2010-01-01
Carbon and nitrogen flows within the food web of a subtidal sandy sediment were studied using stable isotope natural abundances and tracer addition. Natural abundances of 13C and 15N stable isotopes of the consumers and their potential benthic and pelagic resources were measured. δ13C data revealed
14. Determination of diffusion coefficients in cohesive and sandy sediment from the area of Gorleben
International Nuclear Information System (INIS)
Klotz, D.
1989-01-01
The cohesive and sandy sediments stem from shaft driving at the Gorleben salt dome. For the cohesive materials, HTD was used as a tracer substance, while I-131⁻ was used for the sandy materials. Diffusion coefficients of HTD in cohesive materials in their natural texture are in the range of 2×10⁻⁶ to 5×10⁻⁶ cm²/s; those of I-131⁻ in the investigated uniform fine and medium sands are approximately 3×10⁻⁶ cm²/s.
15. Marine meiofauna, carbon and nitrogen mineralization in sandy and soft sediments of Disko Bay, West Greenland
DEFF Research Database (Denmark)
Rysgaard, S.; Christensen, P.B.; Sørensen, Martin Vinther
2000-01-01
Organic carbon mineralization was studied in a shallow-water (4 m), sandy sediment and 2 comparatively deep-water (150 and 300 m), soft sediments in Disko Bay, West Greenland. Benthic microalgae inhabiting the shallow-water locality significantly affected diurnal O-2 conditions within the surface...... is regulated primarily by the availability of organic matter and not by temperature. The shallow-water sediment contained a larger meiofauna population than the deep-water muddy sediments. Crustacean nauplia dominated the upper 9 mm while nematodes dominated below. A typical interstitial fauna of species...... layers of the sediment. Algal photosynthetic activity and nitrogen uptake reduced nitrogen effluxes and denitrification rates. Sulfate reduction was the most important pathway for carbon mineralization in the sediments of the shallow-water station. In contrast, high bottom-water NO3- concentrations...
16. What would happen to Superstorm Sandy under the influence of a substantially warmer Atlantic Ocean?
Science.gov (United States)
Lau, William K. M.; Shi, J. J.; Tao, W. K.; Kim, K. M.
2016-01-01
Based on ensemble numerical simulations, we find that possible responses of Sandy-like superstorms under the influence of a substantially warmer Atlantic Ocean bifurcate into two groups. In the first group, storms are similar to present-day Sandy from genesis to extratropical transition, except they are much stronger, with peak Power Destructive Index (PDI) increased by 50-80%, heavy rain by 30-50%, and maximum storm size (MSS) approximately doubled. In the second group, storms amplify substantially over the interior of the Atlantic warm pool, with peak PDI increased by 100-160%, heavy rain by 70-180%, and MSS more than tripled compared to present-day Superstorm Sandy. These storms when exiting the warm pool, recurve northeastward out to sea, subsequently interact with the developing midlatitude storm by mutual counterclockwise rotation around each other and eventually amplify into a severe Northeastern coastal storm, making landfall over the extreme northeastern regions from Maine to Nova Scotia.
17. Enhanced benthic activity in sandy sublittoral sediments: Evidence from 13C tracer experiments
DEFF Research Database (Denmark)
Bühring, Solveig I.; Ehrenhauss, Sandra; Kamp, Anja
2006-01-01
In situ and on-board pulse-chase experiments were carried out on a sublittoral fine sand in the German Bight (southern North Sea) to investigate the hypothesis that sandy sediments are highly active and have fast turnover rates. To test this hypothesis, we conducted a series of experiments where we...... investigated the pathway of settling particulate organic carbon through the benthic food web. The diatom Ditylum brightwellii was labelled with the stable carbon isotope 13C and injected into incubation chambers. On-board incubations lasted 12, 30 and 132 h, while the in situ experiment was incubated for 32 h....... The study revealed a stepwise short-term processing of a phytoplankton bloom settling on a sandy sediment. After the 12 h incubation, the largest fraction of recovered carbon was in the bacteria (62%), but after longer incubation times (30 and 32 h in situ) the macrofauna gained more importance (15 and 48...
18. Effects of deposition of heavy-metal-polluted harbor mud on microbial diversity and metal resistance in sandy marine sediments
DEFF Research Database (Denmark)
Toes, Ann-Charlotte M; Finke, Niko; Kuenen, J Gijs
2008-01-01
Deposition of dredged harbor sediments in relatively undisturbed ecosystems is often considered a viable option for confinement of pollutants and possible natural attenuation. This study investigated the effects of deposition of heavy-metal-polluted sludge on the microbial diversity of sandy...... the finding that some groups of clones were shared between the metal-impacted sandy sediment and the harbor control, comparative analyses showed that the two sediments were significantly different in community composition. Consequences of redeposition of metal-polluted sediment were primarily underlined...... with cultivation-dependent techniques. Toxicity tests showed that the percentage of Cd- and Cu-tolerant aerobic heterotrophs was highest among isolates from the sandy sediment with metal-polluted mud on top....
19. Study on small-strain behaviours of methane hydrate sandy sediments using discrete element method
Energy Technology Data Exchange (ETDEWEB)
Yu Yanxin; Cheng Yipik [Department of Civil, Environmental and Geomatic Engineering, University College London (UCL), Gower Street, London, WC1E 6BT (United Kingdom); Xu Xiaomin; Soga, Kenichi [Geotechnical and Environmental Research Group, Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ (United Kingdom)
2013-06-18
Methane hydrate bearing soil has attracted increasing interest as a potential energy resource where methane gas can be extracted from dissociating hydrate-bearing sediments. Seismic testing techniques have been applied extensively and in various ways, to detect the presence of hydrates, due to the fact that hydrates increase the stiffness of hydrate-bearing sediments. With the recognition of the limitations of laboratory and field tests, wave propagation modelling using Discrete Element Method (DEM) was conducted in this study in order to provide some particle-scale insights on the hydrate-bearing sandy sediment models with pore-filling and cementation hydrate distributions. The relationship between shear wave velocity and hydrate saturation was established by both DEM simulations and analytical solutions. Obvious differences were observed in the dependence of wave velocity on hydrate saturation for these two cases. From the shear wave velocity measurement and particle-scale analysis, it was found that the small-strain mechanical properties of hydrate-bearing sandy sediments are governed by both the hydrate distribution patterns and hydrate saturation.
20. Climate control on late Holocene high-energy sedimentation along coasts of the northeastern Atlantic Ocean
OpenAIRE
Poirier , Clément; Tessier , Bernadette; Chaumillon , Eric
2017-01-01
International audience; Abundant sedimentological and geochronological data gathered on European sandy coasts highlight major phases of increased high-energy sedimentation in the North Atlantic Ocean during the late Holocene. Owing to an inconsistent use of the terminology, it is often difficult to determine whether studies have described storm-built or wave-built deposits. Both deposits can be identified by overall similar coarse-grained sedimentary facies, but may provide contradictory pale...
1. Ocean surface waves in Hurricane Ike (2008) and Superstorm Sandy (2012): Coupled model predictions and observations
Science.gov (United States)
Chen, Shuyi S.; Curcic, Milan
2016-07-01
Forecasting hurricane impacts of extreme winds and flooding requires accurate prediction of hurricane structure and storm-induced ocean surface waves days in advance. The waves are complex, especially near landfall when the hurricane winds and water depth varies significantly and the surface waves refract, shoal and dissipate. In this study, we examine the spatial structure, magnitude, and directional spectrum of hurricane-induced ocean waves using a high resolution, fully coupled atmosphere-wave-ocean model and observations. The coupled model predictions of ocean surface waves in Hurricane Ike (2008) over the Gulf of Mexico and Superstorm Sandy (2012) in the northeastern Atlantic and coastal region are evaluated with the NDBC buoy and satellite altimeter observations. Although there are characteristics that are general to ocean waves in both hurricanes as documented in previous studies, wave fields in Ike and Sandy possess unique properties due mostly to the distinct wind fields and coastal bathymetry in the two storms. Several processes are found to significantly modulate hurricane surface waves near landfall. First, the phase speed and group velocities decrease as the waves become shorter and steeper in shallow water, effectively increasing surface roughness and wind stress. Second, the bottom-induced refraction acts to turn the waves toward the coast, increasing the misalignment between the wind and waves. Third, as the hurricane translates over land, the left side of the storm center is characterized by offshore winds over very short fetch, which opposes incoming swell. Landfalling hurricanes produce broader wave spectra overall than that of the open ocean. The front-left quadrant is most complex, where the combination of windsea, swell propagating against the wind, increasing wind-wave stress, and interaction with the coastal topography requires a fully coupled model to meet these challenges in hurricane wave and surge prediction.
2. Methane accumulation and forming high saturations of methane hydrate in sandy sediments
Energy Technology Data Exchange (ETDEWEB)
Uchida, T.; Waseda, A. [JAPEX Research Center, Chiba (Japan); Fujii, T. [Japan Oil, Gas and Metals National Corp., Chiba (Japan). Upstream Technology Unit
2008-07-01
Methane supplies for marine gas hydrates are commonly attributed to the microbial conversion of organic materials. This study hypothesized that methane supplies were related to pore water flow behaviours and microscopic migration in intergranular pore systems. Sedimentology and geochemistry analyses were performed on sandy core samples taken from the Nankai trough and the Mallik gas hydrate test site in the Mackenzie Delta. The aim of the study was to determine the influence of geologic and sedimentolic controls on the formation and preservation of natural gas hydrates. Grain size distribution curves indicated that gas hydrate saturations of up to 80 per cent in pore volume occurred throughout the hydrate-dominant sand layers in the Nankai trough and Mallik areas. Water permeability measurements showed that the highly gas hydrate-saturated sands have a permeability of a few millidarcies. Pore-space gas hydrates occurred primarily in fine and medium-grained sands. Core temperature depression, core observations, and laboratory analyses of the hydrates confirmed the pore-spaces as intergranular pore fillings. Results of the study suggested that concentrations of gas hydrates may require a pore space large enough to occur within a host sediments, and that the distribution of porous and coarser-grained sandy sediments is an important factor in controlling the occurrence of gas hydrates. 11 refs., 4 figs.
3. Mobilization And Characterization Of Colloids Generated From Cement Leachates Moving Through A SRS Sandy Sediment
International Nuclear Information System (INIS)
Li, D.; Roberts, K.; Kaplan, D.; Seaman, J.
2011-01-01
Naturally occurring mobile colloids are ubiquitous and are involved in many important processes in the subsurface zone. For example, colloid generation and subsequent mobilization represent a possible mechanism for the transport of contaminants including radionuclides in the subsurface environments. For colloid-facilitated transport to be significant, three criteria must be met: (1) colloids must be generated; (2) contaminants must associate with the colloids preferentially to the immobile solid phase (aquifer); and (3) colloids must be transported through the groundwater or in subsurface environments - once these colloids start moving they become 'mobile colloids'. Although some experimental investigations of particle release in natural porous media have been conducted, the detailed mechanisms of release and re-deposition of colloidal particles within natural porous media are poorly understood. Even though this vector of transport is known, the extent of its importance is not known yet. Colloid-facilitated transport of trace radionuclides has been observed in the field, thus demonstrating a possible radiological risk associated with the colloids. The objective of this study was to determine if cementitious leachate would promote the in situ mobilization of natural colloidal particles from a SRS sandy sediment. The intent was to determine whether cementitious surface or subsurface structure would create plumes that could produce conditions conducive to sediment dispersion and mobile colloid generation. Column studies were conducted and the cation chemistries of influents and effluents were analyzed by ICP-OES, while the mobilized colloids were characterized using XRD, SEM, EDX, PSD and Zeta potential. The mobilization mechanisms of colloids in a SRS sandy sediment by cement leachates were studied.
4. Thorium content in bottom sediments of Pacific and Indian oceans
International Nuclear Information System (INIS)
Gurvich, E.G.; Lisitsyn, A.P.
1980-01-01
Presented are the results of a study of ²³²Th distribution in different substance-genetic types of bottom sediments of the Pacific and Indian oceans. Th content was determined by instrumental neutron activation analysis. Th distribution maps for the surface layer of bottom sediments of the Pacific and Indian oceans are drawn. It is noted that Indian Ocean sediments are much richer in Th; moreover, Th distribution in the different types of sediments is very non-uniform. The non-uniformity of Th distribution in the different types of Pacific Ocean sediments is considerably less than that of the Indian Ocean and exceeds it only in red oozes.
5. Thermophysical properties of deep ocean sediments
International Nuclear Information System (INIS)
Hadley, G.R.; McVey, D.F.; Morin, R.
1980-01-01
Here we report measurements of the thermal conductivity and diffusivity of reconsolidated illite and smectite ocean sediments at a pore pressure of 600 bars and temperatures ranging from 25 to 420 °C. The conductivity and diffusivity were found to be in the range of 0.8 to 1.0 W/m-K and 2.2 to 2.8 × 10⁻⁷ m²/s, respectively. These data are consistent with a mixture model which predicts sediment thermal properties as a function of constituent properties and porosity. Comparison of pre- and post-test physical properties indicated a decrease in pore water content and an order of magnitude increase in shear strength and permeability.
6. Impact of redox-stratification on the diversity and distribution of bacterial communities in sandy reef sediments in a microcosm
Institute of Scientific and Technical Information of China (English)
GAO Zheng; WANG Xin; Angelos K. HANNIDES; Francis J. SANSONE; WANG Guangyi
2011-01-01
Relationships between microbial communities and geochemical environments are important in marine microbial ecology and biogeochemistry. Although biogeochemical redox stratification has been well documented in marine sediments, its impact on microbial communities remains largely unknown. In this study, we applied denaturing gradient gel electrophoresis (DGGE) and clone library construction to investigate the diversity and stratification of bacterial communities in redox-stratified sandy reef sediments in a microcosm. A total of 88 Operational Taxonomic Units (OTU) were identified from 16S rRNA clone libraries constructed from sandy reef sediments in a laboratory microcosm. They were members of nine phyla and three candidate divisions, including Proteobacteria (Alpha-, Beta-, Gamma-, Delta-, and Epsilonproteobacteria), Actinobacteria, Acidobacteria, Bacteroidetes, Chloroflexi, Cyanobacteria, Firmicutes, Verrucomicrobia, Spirochaetes, and the candidate divisions WS3, SO31 and AO19. The vast majority of these phylotypes are related to clone sequences from other marine sediments, but OTUs of Epsilonproteobacteria and WS3 are reported for the first time from permeable marine sediments. Several other OTUs are potential new bacterial phylotypes because of their low similarity with reference sequences. Results from the 16S rRNA gene clone sequence analyses suggested that bacterial communities exhibit clear stratification across large redox gradients in these sediments, with the highest diversity found in the anoxic layer (15-25 mm) and the least diversity in the suboxic layer (3-5 mm). Analysis of the nosZ and amoA gene libraries also indicated the stratification of denitrifiers and nitrifiers, with their highest diversity being in the anoxic and oxic sediment layers, respectively. These results indicated that redox stratification can affect the distribution of bacterial communities in sandy reef sediments.
7. Total Sediment Thickness of the World's Oceans & Marginal Seas
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — A digital total-sediment-thickness database for the world's oceans and marginal seas has been compiled by the NOAA National Geophysical Data Center (NGDC). The data...
8. Predicting the denitrification capacity of sandy aquifers from shorter-term incubation experiments and sediment properties
Directory of Open Access Journals (Sweden)
W. Eschenbach
2013-02-01
Full Text Available Knowledge about the spatial variability of denitrification rates and the lifetime of denitrification in nitrate-contaminated aquifers is crucial to predict the development of groundwater quality. Therefore, regression models were derived to estimate the measured cumulative denitrification of aquifer sediments after one year of incubation from initial denitrification rates and several sediment parameters, namely total sulphur, total organic carbon, extractable sulphate, extractable dissolved organic carbon, hot water soluble organic carbon and potassium permanganate labile organic carbon.
For this purpose, we incubated aquifer material from two sandy Pleistocene aquifers in Northern Germany under anaerobic conditions in the laboratory using the 15N tracer technique. The measured amount of denitrification ranged from 0.19 to 56.2 mg N kg−1 yr−1. The laboratory incubations exhibited high differences between non-sulphidic and sulphidic aquifer material in both aquifers with respect to all investigated sediment parameters. Denitrification rates and the estimated lifetime of denitrification were higher in the sulphidic samples. For these samples, the cumulative denitrification measured during one year of incubation (Dcum(365)) exhibited distinct linear regressions with the stock of reduced compounds in the investigated aquifer samples. Dcum(365) was predictable from sediment variables within a range of uncertainty of 0.5 to 2 (calculated Dcum(365)/measured Dcum(365)) for aquifer material with a Dcum(365) > 20 mg N kg−1 yr−1. Predictions were poor for samples with lower Dcum(365), such as samples from the NO3-bearing groundwater zone, which includes the non-sulphidic samples, from the upper part of both aquifers where denitrification is not sufficient to
9. Benthic solute exchange and carbon mineralization in two shallow subtidal sandy sediments: Effect of advective pore-water exchange
DEFF Research Database (Denmark)
Cook, Perran L. M.; Wenzhofer, Frank; Glud, Ronnie N.
2007-01-01
within the range measured in the chambers. The contribution of advection to solute exchange was highly variable and dependent on sediment topography. Advective processes also had a pronounced influence on the in situ distribution of O-2 within the sediment, with characteristic two-dimensional patterns...... of O-2 distribution across ripples, and also deep subsurface O-2 pools, being observed. Mineralization pathways were predominantly aerobic when benthic mineralization rates were low and advective pore-water flow high as a result of well-developed sediment topography. By contrast, mineralization...... proceeded predominantly through sulfate reduction when benthic mineralization rates were high and advective pore-water flow low as a result of poorly developed topography. Previous studies of benthic mineralization in shallow sandy sediments have generally ignored these dynamics and, hence, have overlooked...
10. Global Ocean Sedimentation Patterns: Plate Tectonic History Versus Climate Change
Science.gov (United States)
Goswami, A.; Reynolds, E.; Olson, P.; Hinnov, L. A.; Gnanadesikan, A.
2014-12-01
Global sediment data (Whittaker et al., 2013) and carbonate content data (Archer, 1996) allow examination of ocean sedimentation evolution with respect to age of the underlying ocean crust (Müller et al., 2008). From these data, we construct time series of ocean sediment thickness and carbonate deposition rate for the Atlantic, Pacific, and Indian ocean basins for the past 120 Ma. These time series are unique to each basin and reflect an integrated response to plate tectonics and climate change. The goal is to parameterize ocean sedimentation tied to crustal age for paleoclimate studies. For each basin, total sediment thickness and carbonate deposition rate from 0.1° × 0.1° cells are binned according to basement crustal age; area-corrected moments (mean, variance, etc.) are calculated for each bin. Segmented linear fits identify trends in present-day carbonate deposition rates and changes in ocean sedimentation from 0 to 120 Ma. In the North and South Atlantic and Indian oceans, mean sediment thickness versus crustal age is well represented by three linear segments, with the slope of each segment increasing with increasing crustal age. However, the transition age between linear segments varies among the three basins. In contrast, mean sediment thickness in the North and South Pacific oceans is numerically smaller and well represented by two linear segments with slopes that decrease with increasing crustal age. These opposing trends are more consistent with the plate tectonic history of each basin being the controlling factor in sedimentation rates, rather than climate change. Unlike total sediment thickness, carbonate deposition rates decrease smoothly with crustal age in all basins, with the primary controls being ocean chemistry and water column depth. References: Archer, D., 1996, Global Biogeochem. Cycles 10, 159-174. Müller, R.D., et al., 2008, Science, 319, 1357-1362. Whittaker, J., et al., 2013, Geochem., Geophys., Geosyst. DOI: 10.1002/ggge.20181
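For readers who want the flavour of the bin-by-crustal-age and segmented-fit workflow described in this record, here is a minimal sketch on synthetic data. It is not the authors' code: the bin width, the plain (non-area-corrected) means, the brute-force breakpoint search and the synthetic thickness/age fields are assumptions made purely for illustration.

```python
import numpy as np

def binned_means(crust_age, thickness, bin_width=5.0):
    """Mean sediment thickness per crust-age bin (Myr). The record describes
    area-corrected moments; a plain mean keeps this illustration short."""
    edges = np.arange(0.0, crust_age.max() + bin_width, bin_width)
    idx = np.digitize(crust_age, edges)
    centres, means = [], []
    for i in range(1, len(edges)):
        sel = idx == i
        if sel.any():
            centres.append(edges[i - 1] + bin_width / 2.0)
            means.append(thickness[sel].mean())
    return np.array(centres), np.array(means)

def two_segment_breakpoint(x, y):
    """Brute-force breakpoint search for a two-segment least-squares linear fit."""
    best_sse, best_break = np.inf, None
    for k in range(2, len(x) - 2):                 # each segment needs >= 2 points
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            sse += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
        if sse < best_sse:
            best_sse, best_break = sse, x[k]
    return best_break                              # crust age where the slope changes

# Synthetic stand-in for the gridded thickness/age fields (slope change at 60 Myr).
rng = np.random.default_rng(0)
age = rng.uniform(0.0, 120.0, 5000)
thickness = np.where(age < 60.0, 2.0 * age, 120.0 + 6.0 * (age - 60.0)) + rng.normal(0.0, 20.0, 5000)
centres, means = binned_means(age, thickness)
print(two_segment_breakpoint(centres, means))      # breakpoint recovered near 60 Myr
```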
11. The role of sediment compaction and groundwater withdrawal in local sea-level rise, Sandy Hook, New Jersey, USA
Science.gov (United States)
Johnson, Christopher S.; Miller, Kenneth G.; Browning, James V.; Kopp, Robert E.; Khan, Nicole S.; Fan, Ying; Stanford, Scott D.; Horton, Benjamin P.
2018-02-01
The rate of relative sea-level (RSL) rise at Sandy Hook, NJ (4.0 ± 0.5 mm/yr) was higher than at The Battery, NY (3.0 ± 0.3 mm/yr) from 1900 to 2012 despite the sites being separated by just 26 km. The difference cannot be explained by differential glacial isostatic adjustment (GIA; 1.4 ± 0.4 and 1.3 ± 0.4 mm/yr RSL rise, respectively) alone. We estimate the contribution of sediment compaction to subsidence at Sandy Hook using high-resolution grain size, percent organic matter, and porosity data from three upper Quaternary (≤13,350 cal yr) cores. The organic matter content indicates that compaction of deglacial silts likely reduced the column thickness by 10-20% over the past 13,350 cal yrs. While compaction rates were high immediately after the main silt deposition (13,350-13,150 cal yrs BP), rates decreased exponentially after deposition to an average 20th century rate of 0.16 mm/yr (90% Confidence Interval (C.I.), 0.06-0.32 mm/yr). The remaining ∼0.7 mm/yr (90% C.I. 0.3-1.2 mm/yr) difference in subsidence between Sandy Hook and The Battery is likely due to anthropogenic groundwater withdrawal. Historical data from Fort Hancock wells (2 km to the southeast of the Sandy Hook tide gauge) and previous regional work show that local and regional water extraction lowered the water levels in the aquifers underlying Sandy Hook. We suggest that the modern order of contribution to subsidence (highest to lowest) appears to be GIA, local/regional groundwater extraction, and compaction of thick Quaternary silts.
12. Temporal dynamics of gastropod fauna on subtidal sandy sediments of the Ensenada de Baiona (NW Iberian Peninsula)
Science.gov (United States)
Moreira, J.; Aldea, C.; Troncoso, J. S.
2010-12-01
The temporal variation of the gastropod fauna inhabiting sandy sediments of the Ensenada de Baiona (Galicia, Spain) was studied at three subtidal sites from February 1996 to February 1997 by means of quantitative sampling. A total of 5,463 individuals representing 51 gastropod species and 22 families were found. The family Pyramidellidae was the most diverse in number of species (11 species), followed by Rissoidae and Trochidae (4 species each). The dogwhelk, Nassarius reticulatus, and the rissoid snail, Rissoa parva, were the numerically dominant species at the three studied sites; those and other abundant species showed their greatest densities by the end of summer and the beginning of autumn. In general, univariate measures of the assemblage (number of species, abundance, diversity and evenness) showed variations through time; greater values were recorded between summer and autumn depending on the site. Multivariate analyses done on abundance data showed certain seasonality in the evolution of the assemblage as expected for shallow subtidal sandy sediments at temperate latitudes; those seasonal changes were mostly related to variations in abundance of numerically dominant species. Although the measured sedimentary variables did not show significant correlations with faunal univariate parameters, sediment heterogeneity due to the presence of mats of Zostera marina L. and shells of dead bivalves might explain the differences in composition of the gastropod assemblage among sampling sites.
13. Predicting Sediment Thickness on Vanished Ocean Crust Since 200 Ma
Science.gov (United States)
Dutkiewicz, A.; Müller, R. D.; Wang, X.; O'Callaghan, S.; Cannon, J.; Wright, N. M.
2017-12-01
Tracing sedimentation through time on existing and vanished seafloor is imperative for constraining long-term eustasy and for calculating volumes of subducted deep-sea sediments that contribute to global geochemical cycles. We present regression algorithms that incorporate the age of the ocean crust and the mean distance to the nearest passive margin to predict sediment thicknesses and long-term decompacted sedimentation rates since 200 Ma. The mean sediment thickness decreases from ˜220 m at 200 Ma to a minimum of ˜140 m at 130 Ma, reflecting the replacement of old Panthalassic ocean floor with young sediment-poor mid-ocean ridges, followed by an increase to ˜365 m at present-day. This increase reflects the accumulation of sediments on ageing abyssal plains proximal to passive margins, coupled with a decrease in the mean distance of any parcel of ocean crust to the nearest passive margin by over 700 km, and a doubling of the total passive margin length at present-day. Mean long-term sedimentation rates increase from ˜0.5 cm/ky at 160 Ma to over 0.8 cm/ky today, caused by enhanced terrigenous sediment influx along lengthened passive margins, superimposed by the onset of ocean-wide carbonate sedimentation. Our predictive algorithms, coupled to a plate tectonic model, provide a framework for constraining the seafloor sediment-driven eustatic sea-level component, which has grown from ˜80 to 210 m since 120 Ma. This implies a long-term sea-level rise component of 130 m, partly counteracting the contemporaneous increase in ocean basin depth due to progressive crustal ageing.
14. Chemical and ancillary data associated with bed sediment, young of year Bluefish (Pomatomus saltatrix) tissue, and mussel (Mytilus edulis and Geukensia demissa) tissue collected after Hurricane Sandy in bays and estuaries of New Jersey and New York, 2013–14
Science.gov (United States)
Smalling, Kelly L.; Deshpande, Ashok D.; Blazer, Vicki; Galbraith, Heather S.; Dockum, Bruce W.; Romanok, Kristin M.; Colella, Kaitlyn; Deetz, Anna C.; Fisher, Irene J.; Imbrigiotta, Thomas E.; Sharack, Beth; Summer, Lisa; Timmons, DeMond; Trainor, John J.; Wieczorek, Daniel; Samson, Jennifer; Reilly, Timothy J.; Focazio, Michael J.
2015-09-09
This report describes the methods and data associated with a reconnaissance study of young of year bluefish and mussel tissue samples as well as bed sediment collected as bluefish habitat indicators during August 2013–April 2014 in New Jersey and New York following Hurricane Sandy in October 2012. This study was funded by the Disaster Relief Appropriations Act of 2013 (PL 113-2) and was conducted by the U.S. Geological Survey (USGS) in cooperation with the National Oceanic and Atmospheric Administration (NOAA).
15. Estuarine bed-sediment-quality data collected in New Jersey and New York after Hurricane Sandy, 2013
Science.gov (United States)
Fischer, Jeffrey M.; Phillips, Patrick J.; Reilly, Timothy J.; Focazio, Michael J.; Loftin, Keith A.; Benzel, William M.; Jones, Daniel K.; Smalling, Kelly L.; Fisher, Shawn C.; Fisher, Irene J.; Iwanowicz, Luke R.; Romanok, Kristin M.; Jenkins, Darkus E.; Bowers, Luke; Boehlke, Adam; Foreman, William T.; Deetz, Anna C.; Carper, Lisa G.; Imbrigiotta, Thomas E.; Birdwell, Justin E.
2015-01-01
This report describes a reconnaissance study of estuarine bed-sediment quality conducted June–October 2013 in New Jersey and New York after Hurricane Sandy in October 2012 to assess the extent of contamination and the potential long-term human and ecological impacts of the storm. The study, funded through the Disaster Relief Appropriations Act of 2013 (PL 113-2), was conducted by the U.S. Geological Survey in cooperation with the U.S. Environmental Protection Agency and the National Oceanographic and Atmospheric Administration. In addition to presenting the bed-sediment-quality data, the report describes the study design, documents the methods of sample collection and analysis, and discusses the steps taken to assure the quality of the data.
16. Regional variability in bed-sediment concentrations of wastewater compounds, hormones and PAHs for portions of coastal New York and New Jersey impacted by hurricane Sandy
Science.gov (United States)
Phillips, Patrick J.; Gibson, Cathy A; Fisher, Shawn C.; Fisher, Irene; Reilly, Timothy J.; Smalling, Kelly L.; Romanok, Kristin M.; Foreman, William T.; ReVello, Rhiannon C.; Focazio, Michael J.; Jones, Daniel K.
2016-01-01
Bed sediment samples from 79 coastal New York and New Jersey (USA) sites were analyzed for 75 compounds including wastewater-associated contaminants, PAHs, and other organic compounds to assess the post-Hurricane Sandy distribution of organic contaminants among six regions. These results provide the first assessment of wastewater compounds, hormones, and PAHs in bed sediment for this region. Concentrations of most wastewater contaminants and PAHs were highest in the most developed region (Upper Harbor/Newark Bay, UHNB) and reflected the wastewater inputs to this area. Although the lack of pre-Hurricane Sandy data for most of these compounds makes it impossible to assess the effect of the storm on wastewater contaminant concentrations, PAH concentrations in the UHNB region reflect pre-Hurricane Sandy conditions in this region. Lower hormone concentrations than predicted by the total organic carbon relation occurred in UHNB samples, suggesting that hormones are being degraded in the UHNB region.
17. Effect of nutrient availability on carbon and nitrogen incorporation and flows through benthic algae and bacteria in near-shore sandy sediment
NARCIS (Netherlands)
Cook, P.; Veuger, B.; Böer, S.; Middelburg, J.J.
2007-01-01
Carbon and nitrogen uptake in a microbial community comprising bacteria and microalgae in a sandy marine sediment under nutrient-limited and -replete conditions was studied using a mesocosm approach. After 2 wk of incubation, a pulse of H13CO3– and 15NH4+ was added to the mesocosms, and subsequent
18. Flood risk analysis for flood control and sediment transportation in sandy regions: A case study in the Loess Plateau, China
Science.gov (United States)
Guo, Aijun; Chang, Jianxia; Wang, Yimin; Huang, Qiang; Zhou, Shuai
2018-05-01
Traditional flood risk analysis focuses on the probability of flood events exceeding the design flood of downstream hydraulic structures while neglecting the influence of sedimentation in river channels on regional flood control systems. This work advances traditional flood risk analysis by proposing a univariate and copula-based bivariate hydrological risk framework which incorporates both flood control and sediment transport. In developing the framework, the conditional probabilities of different flood events under various extreme precipitation scenarios are estimated by exploiting the copula-based model. Moreover, a Monte Carlo-based algorithm is designed to quantify the sampling uncertainty associated with univariate and bivariate hydrological risk analyses. Two catchments located on the Loess plateau are selected as study regions: the upper catchments of the Xianyang and Huaxian stations (denoted as UCX and UCH, respectively). The univariate and bivariate return periods, risk and reliability in the context of uncertainty for the purposes of flood control and sediment transport are assessed for the study regions. The results indicate that sedimentation triggers higher risks of damaging the safety of local flood control systems compared with the event that AMF exceeds the design flood of downstream hydraulic structures in the UCX and UCH. Moreover, there is considerable sampling uncertainty affecting the univariate and bivariate hydrologic risk evaluation, which greatly challenges measures of future flood mitigation. In addition, results also confirm that the developed framework can estimate conditional probabilities associated with different flood events under various extreme precipitation scenarios aiming for flood control and sediment transport. The proposed hydrological risk framework offers a promising technical reference for flood risk analysis in sandy regions worldwide.
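The joint-risk idea summarized in this record can be illustrated with a generic bivariate AND-exceedance return period built on a Gumbel-Hougaard copula. This is a textbook construction rather than the authors' model: the marginal probabilities, the dependence parameter theta and the interpretation of the two variables (flood peak and sediment load) are illustrative assumptions only.

```python
import math

def gumbel_copula(u: float, v: float, theta: float) -> float:
    """Gumbel-Hougaard copula C(u, v); theta >= 1 sets the strength of upper-tail dependence."""
    return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

def and_return_period(u: float, v: float, theta: float, mu: float = 1.0) -> float:
    """Return period of the joint event {X > x AND Y > y}, where u = F_X(x) and v = F_Y(y)
    are marginal non-exceedance probabilities and mu is the mean inter-event time
    (1 yr for annual maxima): T = mu / (1 - u - v + C(u, v))."""
    return mu / (1.0 - u - v + gumbel_copula(u, v, theta))

# Hypothetical marginals: a 100-year flood peak (u = 0.99) and a 50-year sediment load (v = 0.98),
# with moderate dependence (theta = 2).
print(round(and_return_period(u=0.99, v=0.98, theta=2.0)))  # ~130 years for both thresholds to be exceeded together
```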
19. Volcanogenic sediments in the Indian Ocean
Digital Repository Service at National Institute of Oceanography (India)
Pattan, J.N.
for productivity. On the other hand, elements like Fe, Ti, Mg, Mn, Cu and Ni are all diluted by glass shards because the content of these elements is lower than in the associated sediment. Therefore, one has to be careful when interpreting bulk chemical data without...
20. Light Penetration and Light-Intensity in Sandy Marine-Sediments Measured with Irradiance and Scalar Irradiance Fiberoptic Microprobes Rid A-1977-2009
DEFF Research Database (Denmark)
KUHL, M.; LASSEN, C.; JØRGENSEN, BB
1994-01-01
Fiber-optic microprobes for determining irradiance and scalar irradiance were used for light measurements in sandy sediments of different particle size. Intense scattering caused a maximum integral light intensity [photon scalar irradiance, E0(400 to 700 nm) and E0(700 to 880 nm)] at the sediment surface ranging from 180 % of incident collimated light in the coarsest sediment (250 to 500 µm grain size) up to 280 % in the finest sediment (...) ... 1 mm in the coarsest sediments. Below 1 mm, light was attenuated exponentially with depth in all sediments. Light attenuation coefficients decreased ... diffuse. Our results demonstrate the importance of measuring scalar irradiance when the role of light in photobiological processes in sediments, e.g. microbenthic photosynthesis, is investigated.
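Since the abstract reports exponential attenuation below 1 mm, an attenuation coefficient can be recovered from scalar irradiance at two depths; the irradiance values below are assumed for illustration only.

import math

E1, z1 = 1.8, 0.0    # relative scalar irradiance at the sediment surface (180 %), depth in mm
E2, z2 = 0.2, 1.0    # assumed relative scalar irradiance at 1 mm depth
k = math.log(E1 / E2) / (z2 - z1)   # attenuation coefficient, mm^-1, from E(z) = E0*exp(-k*z)
print(k)             # ~2.2 mm^-1 for these assumed values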
1. Wave-induced coherent turbulence structures and sediment resuspension in the nearshore of a prototype-scale sandy barrier beach
Science.gov (United States)
Kassem, Hachem; Thompson, Charlotte E. L.; Amos, Carl L.; Townend, Ian H.
2015-10-01
The suspension of sediments by oscillatory flows is a complex case of fluid-particle interaction. The aim of this study is to provide insight into the spatial (time) and scale (frequency) relationships between wave-generated boundary layer turbulence and event-driven sediment transport beneath irregular shoaling and breaking waves in the nearshore of a prototype sandy barrier beach, using data collected through the Barrier Dynamics Experiment II (BARDEX II). Statistical, quadrant and spectral analyses reveal the anisotropic and intermittent nature of Reynolds' stresses (momentum exchange) in the wave boundary layer, in all three orthogonal planes of motion. The fractional contribution of coherent turbulence structures appears to be dictated by the structural form of eddies beneath plunging and spilling breakers, which in turn define the net sediment mobilisation towards or away from the barrier, and hence ensuing erosion and accretion trends. A standing transverse wave is also observed in the flume, contributing to the substantial skewness of spanwise turbulence. Observed low frequency suspensions are closely linked to the mean flow (wave) properties. Wavelet analysis reveals that the entrainment and maintenance of sediment in suspension through a cluster of bursting sequence is associated with the passage of intermittent slowly-evolving large structures, which can modulate the frequency of smaller motions. Outside the boundary layer, small scale, higher frequency turbulence drives the suspension. The extent to which these spatially varied perturbation clusters persist is associated with suspension events in the high frequency scales, decaying as the turbulent motion ceases to supply momentum, with an observed hysteresis effect.
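A minimal sketch of the quadrant analysis mentioned above: instantaneous (u', w') pairs are sorted into ejections, sweeps and interaction events, and each quadrant's fractional contribution to the momentum flux is summed. This is a generic textbook version, not the BARDEX II processing chain, and it ignores the hole-size thresholding often applied in practice.

import numpy as np

def quadrant_fractions(u, w):
    """Fractional contribution of each quadrant to the total u'w' covariance."""
    up, wp = u - np.mean(u), w - np.mean(w)
    uw = up * wp
    total = np.sum(uw)                      # assumes a non-zero net stress
    quads = {
        "Q1 outward interaction": (up > 0) & (wp > 0),
        "Q2 ejection":            (up < 0) & (wp > 0),
        "Q3 inward interaction":  (up < 0) & (wp < 0),
        "Q4 sweep":               (up > 0) & (wp < 0),
    }
    return {name: np.sum(uw[mask]) / total for name, mask in quads.items()}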
2. Effects of bioremediation agents on oil degradation in mineral and sandy salt marsh sediments
International Nuclear Information System (INIS)
Lin, Q.; Mendelssohn, I.A.; Henry, C.B. Jr.; Roberts, P.O.; Walsh, M.M.; Overton, E.B.; Portier, R.J.
1999-01-01
Although bioremediation for oil spill cleanup has received considerable attention in recent years, its satisfactory use in the cleanup of oil spills in the wetland environment is still generally untested. A study of the most often used bioremediation agents, fertiliser, microbial product and soil oxidation, as a means of enhancing oil biodegradation in coastal mineral and sandy marsh substrates was conducted in controlled greenhouse conditions. Artificially weathered south Louisiana crude oil was applied to sods of marsh (soil and intact vegetation) at a rate of 2 l m-2. Fertiliser application enhanced marsh plant growth, soil microbial populations, and oil biodegradation rate. The live aboveground biomass of Spartina alterniflora with fertiliser application was higher than that without fertiliser. The application of fertiliser significantly increased soil microbial respiration rates, indicating the potential for enhancing oil biodegradation. Bioremediation with fertiliser application significantly reduced the total targeted normal hydrocarbons (TTNH) and total targeted aromatic hydrocarbons (TTAH) remaining in the soil, by 81% and 17%, respectively, compared to those of the oil controls. TTNH/hopane and TTAH/hopane ratios showed a more consistent reduction, further suggesting an enhancement of oil biodegradation by fertilisation. Furthermore, soil type affected oil bioremediation; the extent of fertiliser-enhanced oil biodegradation was greater for sandy (13% TTNH remaining in the treatments with fertiliser compared to the control) than for mineral soils (26% of the control), suggesting that fertiliser application was more effective in enhancing TTNH degradation in the former. Application of microbial product and soil oxidant had no positive effects on the variables mentioned above under the present experimental conditions, suggesting that microbial degraders are not limiting biodegradation in this soil. Thus, the high cost of microbial amendments during
3. Ion migration in ocean sediments: subseafloor radioactive waste disposal
International Nuclear Information System (INIS)
Nuttall, H.E.; Ray, A.K.; Davis, E.J.
1980-01-01
In this study of seabed disposal, analytical ion transport models were developed and used to elucidate ion migration through ocean sediments and to study the escape of ions from the ocean floor into the water column. An unsteady state isothermal diffusion model was developed for the region far from the canister to examine the effects of ion diffusion, adsorption, radioactive decay, sediment thickness and canister position. Analytical solutions were derived to represent the transient concentration profiles within the sediment, ion flux and the ion discharge rate to the water column for two types of initial conditions: instantaneous dissolution of the canister and constant canister leakage. Generalized graphs showing ion migration and behavior are presented
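The far-field behaviour described (diffusion, sorption, radioactive decay) can be illustrated with the textbook solution for an instantaneous plane source in an infinite medium; this is an analog of the kind of model the paper derives, not its exact solution, and the parameter values below (effective diffusivity, retardation factor, Pu-239 half-life) are assumptions.

import numpy as np

def concentration(x, t, M=1.0, De=1e-10, R=100.0, half_life_yr=24110.0):
    """C(x, t) for dC/dt = (De/R) d2C/dx2 - lam*C with an instantaneous plane source M at x = 0."""
    Da = De / R                                   # apparent diffusivity with linear sorption, m^2/s
    lam = np.log(2.0) / (half_life_yr * 3.156e7)  # decay constant, 1/s
    gauss = M / np.sqrt(4.0 * np.pi * Da * t) * np.exp(-x**2 / (4.0 * Da * t))
    return gauss * np.exp(-lam * t)               # decay of the migrating species

# e.g. relative concentration 1 m from the source after 10,000 years:
print(concentration(x=1.0, t=1e4 * 3.156e7))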
4. Widespread Anthropogenic Nitrogen in Northwestern Pacific Ocean Sediment.
Science.gov (United States)
Kim, Haryun; Lee, Kitack; Lim, Dhong-Il; Nam, Seung-Il; Kim, Tae-Wook; Yang, Jin-Yu T; Ko, Young Ho; Shin, Kyung-Hoon; Lee, Eunil
2017-06-06
Sediment samples from the East China and Yellow seas collected adjacent to continental China were found to have lower δ15N values (expressed as δ15N = [(15N/14N)sample / (15N/14N)air - 1] × 1000‰; the sediment 15N/14N ratio relative to the 15N/14N ratio of air nitrogen). In contrast, the Arctic sediments from the Chukchi Sea, the sampling region furthest from China, showed higher δ15N values (2-3‰ higher than those representing the East China and the Yellow sea sediments). Across the sites sampled, the levels of sediment δ15N increased with increasing distance from China, which is broadly consistent with the decreasing influence of anthropogenic nitrogen (N_ANTH) resulting from fossil fuel combustion and fertilizer use. We concluded that, of several processes, the input of N_ANTH appears to be emerging as a new driver of change in the sediment δ15N value in marginal seas adjacent to China. The present results indicate that the effect of N_ANTH has extended beyond the ocean water column into the deep sedimentary environment, presumably via biological assimilation of N_ANTH followed by deposition. Further, the findings indicate that N_ANTH is taking over from the conventional paradigm of nitrate flux from nitrate-rich deep water as the primary driver of biological export production in this region of the Pacific Ocean.
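For reference, the δ15N definition quoted in the abstract translates directly into code; the atmospheric N2 ratio used below is the commonly quoted standard value and is an assumption of this sketch, not a figure from the paper.

R_AIR = 0.0036765    # commonly quoted 15N/14N of atmospheric N2 (assumed standard value)

def delta15N_permil(r_sample, r_standard=R_AIR):
    """delta15N = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

print(delta15N_permil(0.003695))   # ~5 per mil for this example ratio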
5. Impacts of ocean acidification on sediment processes in shallow waters of the Arctic Ocean.
Science.gov (United States)
Gazeau, Frédéric; van Rijswijk, Pieter; Pozzato, Lara; Middelburg, Jack J
2014-01-01
Despite the important roles of shallow-water sediments in global biogeochemical cycling, the effects of ocean acidification on sedimentary processes have received relatively little attention. As high-latitude cold waters can absorb more CO2 and usually have a lower buffering capacity than warmer waters, acidification rates in these areas are faster than those in sub-tropical regions. The present study investigates the effects of ocean acidification on sediment composition, processes and sediment-water fluxes in an Arctic coastal system. Undisturbed sediment cores, exempt of large dwelling organisms, were collected, incubated for a period of 14 days, and subject to a gradient of pCO2 covering the range of values projected for the end of the century. On five occasions during the experimental period, the sediment cores were isolated for flux measurements (oxygen, alkalinity, dissolved inorganic carbon, ammonium, nitrate, nitrite, phosphate and silicate). At the end of the experimental period, denitrification rates were measured and sediment samples were taken at several depth intervals for solid-phase analyses. Most of the parameters and processes (i.e. mineralization, denitrification) investigated showed no relationship with the overlying seawater pH, suggesting that ocean acidification will have limited impacts on the microbial activity and associated sediment-water fluxes on Arctic shelves, in the absence of active bio-irrigating organisms. Only following a pH decrease of 1 pH unit, not foreseen in the coming 300 years, significant enhancements of calcium carbonate dissolution and anammox rates were observed. Longer-term experiments on different sediment types are still required to confirm the limited impact of ocean acidification on shallow Arctic sediment processes as observed in this study.
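The sediment-water fluxes referred to here are typically derived from the concentration change in the water enclosed above the core during the incubation; a minimal sketch, with the drawdown rate and water height as assumed placeholder values:

dC_dt = -5.0 / 3600.0   # O2 change in the overlying water, mmol m^-3 s^-1 (assumed 5 uM per hour drawdown)
h = 0.15                # height of water enclosed above the sediment in the core, m (assumed)
flux = dC_dt * h        # sediment-water flux, mmol m^-2 s^-1 (negative = uptake by the sediment)
print(flux * 86400.0)   # about -18 mmol m^-2 d^-1 for these assumed numbers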
6. Modeling and measuring the relationships between sediment transport processes, alluvial bedforms and channel-scale morphodynamics in sandy braided rivers.
Science.gov (United States)
Nicholas, A. P.; Ashworth, P. J.; Best, J.; Lane, S. N.; Parsons, D. R.; Sambrook Smith, G.; Simpson, C.; Strick, R. J. P.; Unsworth, C. A.
2017-12-01
Recent years have seen significant advances in the development and application of morphodynamic models to simulate river evolution. Despite this progress, significant challenges remain to be overcome before such models can provide realistic simulations of river response to environmental change, or be used to determine the controls on alluvial channel patterns and deposits with confidence. This impasse reflects a wide range of factors, not least the fact that many of the processes that control river behaviour operate at spatial scales that cannot be resolved by such models. For example, sand-bed rivers are characterised by multiple scales of topography (e.g., dunes, bars, channels), the finest of which must often be parameterized, rather than represented explicitly in morphodynamic models. We examine these issues using a combination of numerical modeling and field observations. High-resolution aerial imagery and Digital Elevation Models obtained for the sandy braided South Saskatchewan River in Canada are used to quantify dune, bar and channel morphology and their response to changing flow discharge. Numerical simulations are carried out using an existing morphodynamic model based on the 2D shallow water equations, coupled with new parameterisations of the evolution and influence of alluvial bedforms. We quantify the spatial patterns of sediment flux using repeat images of dune migration and bar evolution. These data are used to evaluate model predictions of sediment transport and morphological change, and to assess the degree to which model performance is controlled by the parametrization of roughness and sediment transport phenomena linked to subgrid-scale bedforms (dunes). The capacity of such models to replicate the characteristic multi-scale morphology of bars in sand-bed rivers, and the contrasting morphodynamic signatures of braiding during low and high flow conditions, is also assessed.
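For orientation, morphodynamic models of this kind close the hydrodynamics with a bed-evolution (sediment mass balance) equation; a standard form is the Exner relation below, stated here only as background and not as the paper's specific formulation (its bedform and roughness closures are not reproduced).

(1 - \lambda_p)\,\frac{\partial \eta}{\partial t} + \frac{\partial q_{s,x}}{\partial x} + \frac{\partial q_{s,y}}{\partial y} = 0

where \eta is the bed elevation, \lambda_p the bed porosity, and q_{s,x}, q_{s,y} the volumetric sediment transport rates per unit width.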
7. Production of fluorescent dissolved organic matter in Arctic Ocean sediments
Science.gov (United States)
Chen, Meilian; Kim, Ji-Hoon; Nam, Seung-Il; Niessen, Frank; Hong, Wei-Li; Kang, Moo-Hee; Hur, Jin
2016-12-01
Little is known about the production of fluorescent dissolved organic matter (FDOM) in the anoxic oceanic sediments. In this study, sediment pore waters were sampled from four different sites in the Chukchi-East Siberian Seas area to examine the bulk dissolved organic carbon (DOC) and their optical properties. The production of FDOM, coupled with the increase of nutrients, was observed above the sulfate-methane-transition-zone (SMTZ). The presence of FDOM was concurrent with sulfate reduction and increased alkalinity (R2 > 0.96) ... (R2 > 0.95) ... CDOM and FDOM to the overlying water column, unearthing a channel of generally bio-refractory and pre-aged DOM to the oceans.
8. Tracing contaminant pathways in sandy heterogeneous glaciofluvial sediments using a sedimentary depositional model
International Nuclear Information System (INIS)
Webb, E.K.; Anderson, M.P.
1990-01-01
Heterogeneous sedimentary deposits present complications for tracking contaminant movement by causing a complex advective flow field. Connected areas of high conductivity produce so-called fast paths that control movement of solutes. Identifying potential fast paths and describing the variation in hydraulic properties was attempted through simulating the deposition of a glaciofluvial deposit (outwash). Glaciofluvial deposits usually consist of several depositional facies, each of which has different physical characteristics, depositional structures and hydraulic properties. Therefore, it is unlikely that the property of stationarity (a constant mean hydraulic conductivity and a mono-modal probability distribution) holds for an entire glaciofluvial sequence. However, the process of dividing an outwash sequence into geologic facies presumably identifies units of material with similar physical characteristics. It is proposed that patterns of geologic facies determined by field observation can be quantified by mathematical simulation of sediment deposition. Subsequently, the simulated sediment distributions can be used to define the distribution of hydrogeologic parameters and locate possible fast paths. To test this hypothesis, a hypothetical glacial outwash deposit based on geologic facies descriptions contained in the literature was simulated using a sedimentary depositional model, SEDSIM, to produce a three-dimensional description of sediment grain size distributions. Grain size distributions were then used to estimate the spatial distribution of hydraulic conductivity. Subsequently a finite-difference flow model and linked particle tracking algorithm were used to trace conservative transport pathways. This represents a first step in describing the spatial heterogeneity of hydrogeologic characteristics for glaciofluvial and other braided stream environments. (Author) (39 refs., 7 figs.)
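The step from simulated grain size to hydraulic conductivity can be illustrated with a Hazen-type rule of thumb; the paper does not state which empirical relation was used, so the relation, coefficient, and input values below are assumptions for illustration only.

def hazen_K_cm_per_s(d10_mm, C=1.0):
    """Hazen-type estimate: K [cm/s] ~ C * d10^2 with d10 in mm and C roughly 0.4-1.2."""
    return C * d10_mm ** 2

def seepage_velocity_cm_per_s(K_cm_s, gradient, porosity):
    """Average linear velocity used for particle tracking: v = K * i / n."""
    return K_cm_s * gradient / porosity

K = hazen_K_cm_per_s(0.3)                                        # assumed medium sand, d10 = 0.3 mm
v = seepage_velocity_cm_per_s(K, gradient=0.005, porosity=0.3)
print(K, v)   # ~0.09 cm/s and ~1.5e-3 cm/s (roughly 1.3 m/day) for these assumptions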
9. Finite element analysis of thermal convection in deep ocean sediments
International Nuclear Information System (INIS)
Gartling, D.K.
1980-01-01
Of obvious importance to the study and engineering of a seabed disposal is the determination of the temperature and fluid flow fields existing in the sediment layer and the perturbation of these fields due to the implantation of localized heat sources. The fluid mechanical and heat transfer process occurring in oceanic sediments may be characterized as free (or natural) convection in a porous material. In the case of an undisturbed sediment layer, the driving force for the natural circulation of pore water comes from the geothermal heat flux. Current theories for heat flow from the sea floor suggest the possibility of large scale hydrothermal circulation in the oceanic crust (see e.g., Ribando, et al. 1976) which is in turn coupled with a convection process in the overlying sediment layer (Anderson 1980, Anderson, et al. 1979). The introduction of a local heat source, such as a waste canister, into a saturated sediment layer would by itself initiate a convection process due to buoyancy forces. Since the mathematical description of natural convection in a porous medium is of sufficient complexity to preclude the use of most analytic methods of analysis, approximate numerical procedures are often employed. In the following sections, a particular type of numerical method is described that has proved useful in the solution of a variety of porous flow problems. However, rather than concentrate on the details of the numerical algorithm the main emphasis of the presentation will be on the types of problems and results that are encountered in the areas of oceanic heat flow and seabed waste disposal
10. Enrichment of Geobacter species in response to stimulation of Fe(III) reduction in sandy aquifer sediments
Science.gov (United States)
Snoeyenbos-West, O.L.; Nevin, K.P.; Anderson, R.T.; Lovley, D.R.
2000-01-01
Engineered stimulation of Fe(III) has been proposed as a strategy to enhance the immobilization of radioactive and toxic metals in metal-contaminated subsurface environments. Therefore, laboratory and field studies were conducted to determine which microbial populations would respond to stimulation of Fe(III) reduction in the sediments of sandy aquifers. In laboratory studies, the addition of either various organic electron donors or electron shuttle compounds stimulated Fe(III) reduction and resulted in Geobacter sequences becoming important constituents of the Bacterial 16S rDNA sequences that could be detected with PCR amplification and denaturing gradient gel electrophoresis (DGGE). Quantification of Geobacteraceae sequences with a PCR most-probable-number technique indicated that the extent to which numbers of Geobacter increased was related to the degree of stimulation of Fe(III) reduction. Geothrix species were also enriched in some instances, but were orders of magnitude less numerous than Geobacter species. Shewanella species were not detected, even when organic compounds known to be electron donors for Shewanella species were used to stimulate Fe(III) reduction in the sediments. Geobacter species were also enriched in two field experiments in which Fe(III) reduction was stimulated with the addition of benzoate or aromatic hydrocarbons. The apparent growth of Geobacter species concurrent with increased Fe(III) reduction suggests that Geobacter species were responsible for much of the Fe(III) reduction in all of the stimulation approaches evaluated in three geographically distinct aquifers. Therefore, strategies for subsurface remediation that involve enhancing the activity of indigenous Fe(III)-reducing populations in aquifers should consider the physiological properties of Geobacter species in their treatment design.
11. Impacts of ocean acidification on sediment processes in shallow waters of the Arctic Ocean
NARCIS (Netherlands)
Gazeau, F.; van Rijswijk, P.; Pozzato, L.; Middelburg, J.J.
Despite the important roles of shallow-water sediments in global biogeochemical cycling, the effects of ocean acidification on sedimentary processes have received relatively little attention. As high-latitude cold waters can absorb more CO2 and usually have a lower buffering capacity than warmer
12. Impacts of Ocean Acidification on Sediment Processes in Shallow Waters of the Arctic Ocean
NARCIS (Netherlands)
Gazeau, F.; van Rijswijk, P.; Pozzato, L.; Middelburg, J.J.
2014-01-01
Despite the important roles of shallow-water sediments in global biogeochemical cycling, the effects of ocean acidification on sedimentary processes have received relatively little attention. As high-latitude cold waters can absorb more CO2 and usually have a lower buffering capacity than warmer
13. Sediment and discharge yields within a minimally disturbed, headwater watershed in North Central Pennsylvania, USA, with an emphasis on Superstorm Sandy
Science.gov (United States)
Maloney, Kelly O.; Shull, Dustin R.
2015-01-01
We estimated discharge and suspended sediment (SS) yield in a minimally disturbed watershed in North Central Pennsylvania, USA, and compared a typical storm (September storm, 4.80 cm) to a large storm (Superstorm Sandy, 7.47 cm rainfall). Depending on branch, Sandy contributed 9.7–19.9 times more discharge and 11.5–37.4 times more SS than the September storm. During the September storm, the upper two branches accounted for 60.6% of discharge and 88.8% of SS at Lower Branch; during Sandy these percentages dropped to 36.1% for discharge and 30.1% for SS. The branch with roads in close proximity had a per-area SS yield two to three times that of the branch without such roads. Hysteresis loops showed typical clockwise patterns for the September storm and more complicated patterns for Sandy, reflecting the multipeak event. Estimates of SS and hysteresis in minimally disturbed watersheds provide useful information that can be compared spatially and temporally to facilitate management.
14. Hurricane Sandy Poster (October 29, 2012)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Hurricane Sandy poster. Multi-spectral image from Suomi-NPP shows Hurricane Sandy approaching the New Jersey Coast on October 29, 2012. Poster size is approximately...
15. Marine Microbial Gene Abundance and Community Composition in Response to Ocean Acidification and Elevated Temperature in Two Contrasting Coastal Marine Sediments
Directory of Open Access Journals (Sweden)
Ashleigh R. Currie
2017-08-01
Marine ecosystems are exposed to a range of human-induced climate stressors, in particular changing carbonate chemistry and elevated sea surface temperatures as a consequence of climate change. More research effort is needed to reduce uncertainties about the effects of global-scale warming and acidification for benthic microbial communities, which drive sedimentary biogeochemical cycles. In this research, mesocosm experiments were set up using muddy and sandy coastal sediments to investigate the independent and interactive effects of elevated carbon dioxide concentrations (750 ppm CO2) and elevated temperature (ambient +4°C) on the abundance of taxonomic and functional microbial genes. Specific quantitative PCR primers were used to target archaeal, bacterial, and cyanobacterial/chloroplast 16S rRNA in both sediment types. Nitrogen cycling genes, archaeal and bacterial ammonia monooxygenase (amoA) and bacterial nitrite reductase (nirS), were specifically targeted to identify changes in microbial gene abundance and potential impacts on nitrogen cycling. In muddy sediment, microbial gene abundance, including amoA and nirS genes, increased under elevated temperature and reduced under elevated CO2 after 28 days, accompanied by shifts in community composition. In contrast, the combined stressor treatment showed a non-additive effect with lower microbial gene abundance throughout the experiment. The response of microbial communities in the sandy sediment was less pronounced, with the most noticeable response seen in the archaeal gene abundances in response to environmental stressors over time. 16S rRNA, amoA, and nirS genes were lower in abundance in the combined stressor treatments in sandy sediments. Our results indicated that marine benthic microorganisms, especially in muddy sediments, are susceptible to changes in ocean carbonate chemistry and seawater temperature, which ultimately may have an impact upon key benthic biogeochemical cycles.
16. Bacterial Diversity in Deep-Sea Sediments from Afanasy Nikitin Seamount, Equatorial Indian Ocean
Digital Repository Service at National Institute of Oceanography (India)
Khandeparker, R.; Meena, R.M.; Deobagkar, D.D.
Deep-sea sediments can reveal much about the last 200 million years of Earth history, including the history of ocean life and climate. Microbial diversity in Afanasy Nikitin seamount located at Equatorial East Indian Ocean (EEIO) was investigated...
17. Total Sediment Thickness of the World's Oceans & Marginal Seas, Version 2
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — NGDC's global ocean sediment thickness grid (Divins, 2003) has been updated for the Australian-Antarctic region (60°-155° E, 30°-70° S). New seismic reflection...
18. Archive of Geosample Information from the British Ocean Sediment Core Research Facility (BOSCORF)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The British Ocean Sediment Core Research Facility (BOSCORF), National Oceanography Centre, is a contributor to the Index to Marine and Lacustrine Geological Samples...
19. Dynamic hole closure behind a deep ocean sediment penetrator
International Nuclear Information System (INIS)
Dzwilewski, P.T.; Karnes, C.H.
1982-01-01
A freefall or boosted penetrator is one concept being considered to dispose of nuclear waste in the deep ocean seabed. For this technique to be acceptable, the sediment must be an effective barrier to the migration of radioactive nuclides, which means that the hole behind the advancing penetrator must close. One mechanism which can cause the hole to close immediately behind the penetrator is the reduction in water pressure in the wake as water tries to follow the penetrator into the sediment. An approximate solution to this complex problem is presented which analyzes the deformation of the sediment with a nonlinear, large displacement and strain, Lagrangian finite-difference computer code (STEALTH). The water was treated by Bernoulli's Principle for flow in a pipe resulting in a pressure boundary condition applied to the sediment surface along the path after passage of the penetrator. Two one-dimensional and eight two-dimensional calculations were performed with various penetrator velocities (15, 30, and 60 m/s) and sediment shear strengths. In two of the calculations, the dynamic pressure reduction was neglected to see if geostatic stresses alone would close the hole. The results of this study showed that geostatic stresses alone would not close the hole but the dynamic pressure reduction would. The largest uncertainty in the analysis was the pressure conditions in the water behind the penetrator in which frictionless, steady-state flow, in a uniform diameter pipe was assumed. A more sophisticated and realistic pressure condition has been formulated and will be implemented in the computer code in the near future
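The Bernoulli-type boundary condition described above amounts to a dynamic pressure reduction of one half rho v squared in the wake; a quick check for the three study velocities, with an assumed seawater density:

rho_sw = 1025.0                      # seawater density, kg/m^3 (assumed)
for v in (15.0, 30.0, 60.0):         # penetrator velocities considered in the study, m/s
    dp = 0.5 * rho_sw * v ** 2       # dynamic pressure reduction behind the penetrator, Pa
    print(v, round(dp / 1e3), "kPa") # ~115, ~461, ~1845 kPa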
20. Caesium-137 in sandy sediments of the River Loire (FR): Assessment of an alluvial island evolving over the last 50 years
Energy Technology Data Exchange (ETDEWEB)
Detriche, Sebastien; Rodrigues, Stephane; Macaire, Jean-Jacques; Breheret, Jean-Gabriel; Bakyono, Jean-Paul [Universite Francois-Rabelais de Tours, CNRS/INSU UMR 6113 ISTO, Universite d' Orleans Faculte des Sciences et Techniques, Laboratoire de Geologie des Environnements Aquatiques Continentaux, Parc de Grandmont, 37200 Tours (France); Bonte, Philippe [UMR CNRS-CEA 1572, Laboratoire des Sciences du Climat et de l' Environnement - LSCE, CNRS, Domaine du CNRS, Bat. 12, 91198 Gif-sur-Yvette (France); Juge, Philippe [CETU-Elmis ingenieries, Antenne Universitaire en Val de Vienne, 11 quai Danton, 37500 Chinon (France)
2010-07-01
Recent sedimentological and morphological evolution of an island in the River Loire (FR) was investigated using the 137Cs method. This study describes the morphological adjustment of the island in the last 50 years, which corresponds to the increased bed incision of this sandy, multiple-channel environment because of, among other things, the increase in sediment extraction up to 1995. The results show that some 137Cs can be retained by sandy particles, potentially in clay minerals forming weathering features included in detrital sand grains. From a morphological perspective, significant lateral erosion can be observed in the upstream part of the island, while a weak lateral accretion occurs in its downstream section. Data about 137Cs and aerial photographs show that the morphology of the island margins has undergone significant changes leading to a lateral migration, while the centre of the island has remained relatively stable or is slowly eroding. The migration of the island depends on: (1) the withdrawal of inherited pre-incision morphological units, such as levees, or the development of new units, such as a channel shelf; (2) water and sediment supply from surrounding channels during flood events; (3) preferential sediment trapping (20 mm year-1) from the presence of riparian vegetation on the bank of the secondary channel that is subject to narrowing. The sedimentological and morphological response of the island in the context of incision of the Loire river bed is expressed mainly by lateral migration and secondarily by a low vertical adjustment. (authors)
1. Caesium-137 in sandy sediments of the River Loire (FR): Assessment of an alluvial island evolving over the last 50 years
International Nuclear Information System (INIS)
Detriche, Sebastien; Rodrigues, Stephane; Macaire, Jean-Jacques; Breheret, Jean-Gabriel; Bakyono, Jean-Paul; Bonte, Philippe; Juge, Philippe
2010-01-01
Recent sedimentological and morphological evolution of an island in the River Loire (FR) was investigated using the 137Cs method. This study describes the morphological adjustment of the island in the last 50 years, which corresponds to the increased bed incision of this sandy, multiple-channel environment because of, among other things, the increase in sediment extraction up to 1995. The results show that some 137Cs can be retained by sandy particles, potentially in clay minerals forming weathering features included in detrital sand grains. From a morphological perspective, significant lateral erosion can be observed in the upstream part of the island, while a weak lateral accretion occurs in its downstream section. Data about 137Cs and aerial photographs show that the morphology of the island margins has undergone significant changes leading to a lateral migration, while the centre of the island has remained relatively stable or is slowly eroding. The migration of the island depends on: (1) the withdrawal of inherited pre-incision morphological units, such as levees, or the development of new units, such as a channel shelf; (2) water and sediment supply from surrounding channels during flood events; (3) preferential sediment trapping (20 mm year-1) from the presence of riparian vegetation on the bank of the secondary channel that is subject to narrowing. The sedimentological and morphological response of the island in the context of incision of the Loire river bed is expressed mainly by lateral migration and secondarily by a low vertical adjustment. (authors)
2. Vertical suspended sediment fluxes observed from ocean gliders
Science.gov (United States)
Merckelbach, Lucas; Carpenter, Jeffrey
2016-04-01
Many studies trying to understand a coastal system in terms of sediment transport paths resort to numerical modelling - combining circulation models with sediment transport models. Two aspects herein are crucial: sediment fluxes across the sea bed-water column interface, and the subsequent vertical mixing by turbulence. Both aspects are highly complex and have relatively short time scales, so that the processes involved are implemented in numerical models as parameterisations. Due to the effort required to obtain field observations of suspended sediment concentrations (and other parameters), measurements are scarce, which makes the development and tuning of parameterisations a difficult task. Ocean gliders (autonomous underwater vehicles propelled by a buoyancy engine) provide a platform complementing more traditional methods of sampling. In this work we present observations of suspended sediment concentration (SSC) and dissipation rate taken by two gliders, each equipped with optical sensors and a microstructure sensor, along with current observations from a bottom mounted ADCP, all operated in the German Bight sector of the North Sea in Summer 2014. For about two weeks of a four-week experiment, the gliders were programmed to fly in a novel way as Lagrangian profilers to water depths of about 40 m. The benefit of this approach is that the rate of change of SSC - and other parameters - is local to the water column, as opposed to an unknown composition of temporal and spatial variability when gliders are operated in the usual way. Therefore, vertical sediment fluxes can be calculated without the need of the - often dubious - assumption that spatial variability can be neglected. During the experiment the water column was initially thermally stratified, with a cross-pycnocline diffusion coefficient estimated at 7×10^-5 m^2 s^-1. Halfway through the experiment the remnants of tropical storm Bertha arrived at the study site and caused a complete mixing of the water
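With the cross-pycnocline diffusivity quoted in the abstract, a vertical turbulent sediment flux can be approximated as a gradient flux F = -K dC/dz; K below is the abstract's estimate, while the SSC difference and pycnocline thickness are placeholders.

K = 7e-5                 # cross-pycnocline diffusivity, m^2/s (value estimated in the abstract)
dC = -5.0e-3             # SSC difference across the pycnocline, kg/m^3 (assumed)
dz = 5.0                 # pycnocline thickness, m (assumed)
flux = -K * dC / dz      # vertical turbulent SSC flux, kg m^-2 s^-1 (positive upward)
print(flux)              # ~7e-8 kg m^-2 s^-1, i.e. roughly 0.25 g m^-2 per hour, for these assumptions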
3. Lasting Impact of a Tsunami Event on Sediment-Organism Interactions in the Ocean
Science.gov (United States)
Seike, Koji; Sassa, Shinji; Shirai, Kotaro; Kubota, Kaoru
2018-02-01
Although tsunami sedimentation is a short-term phenomenon, it may control the long-term benthic environment by altering seafloor surface characteristics such as topography and grain-size composition. By analyzing sediment cores, we investigated the long-term effect of the 2011 tsunami generated by the Tohoku Earthquake off the Pacific coast of Japan on sediment mixing (bioturbation) by an important ecosystem engineer, the heart urchin Echinocardium cordatum. Recent tsunami deposits allow accurate estimation of the depth of current bioturbation by E. cordatum, because there are no preexisting burrows in the sediments. The in situ hardness of the substrate decreased significantly with increasing abundance of E. cordatum, suggesting that echinoid bioturbation softens the seafloor sediment. Sediment-core analysis revealed that this echinoid rarely burrows into the coarser-grained (medium-grained to coarse-grained) sandy layer deposited by the 2011 tsunami; thus, the vertical grain-size distribution resulting from tsunami sedimentation controls the depth of E. cordatum bioturbation. As sandy tsunami layers are preserved in the seafloor substrate, their restriction on bioturbation continues for an extended period. The results demonstrate that understanding the effects on seafloor processes of extreme natural events that occur on geological timescales, including tsunami events, is important in revealing continuing interactions between seafloor sediments and marine benthic invertebrates.
4. Sediment Lofting From Melt-Water Generated Turbidity Currents During Heinrich Events as a Tool to Assess Main Sediment Delivery Phases to Small Subpolar Ocean Basins
Science.gov (United States)
Hesse, R.
2009-05-01
Small subpolar ocean basins such as the Labrador Sea received a major portion (25%) of their sediment fill during the Pleistocene glaciations (less than 5% of the basin's lifetime), but the detailed timing of sediment supply to the basin remained essentially unknown until recently. The main sediment input into the basin was probably not coupled to major glacial cycles and associated sea-level changes but was related to Heinrich events. Discovery of the depositional facies of fine-grained lofted sediment provides a tool which suggests that the parent-currents from which lofting took place may have been sandy-gravelly turbidity currents that built a huge braided abyssal plain in the Labrador Sea (700 by 120 km underlain by 150 m on average of coarse-grained sediment) which is one of the largest sand accumulations (~10^4 km3) on Earth. The facies of lofted sediment consists of stacked layers of graded muds that contain ice-rafted debris (IRD) which impart a bimodal grain-size distribution to the graded muds. The texturally incompatible grain populations of the muds (median size between 4 and 8 micrometers) and the randomly distributed coarse silt and sand-sized IRD require the combination of two transport processes that delivered the populations independently and allowed mixing at the depositional site: (i) sediment rafting by icebergs (dropstones) and (ii) the rise of turbid freshwater plumes out of fresh-water generated turbidity currents. Sediment lofting from turbidity currents is a process that occurs in density currents generated from sediment-laden fresh-water discharges into the sea that can produce reversed buoyancy, as is well known from experiments. When the flows have traveled long enough, their tops will have lost enough sediment by settling so that they become hypopycnal (their density decreasing below that of the ambient seawater) causing the current tops to lift up. The turbid fresh-water clouds buoyantly rise out of the turbidity current to a level of
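The buoyancy reversal (lofting) described above has a simple threshold: a fresh-water flow carrying suspended sediment becomes lighter than seawater once its volumetric sediment concentration drops below (rho_sw - rho_fw)/(rho_s - rho_fw). The densities below are assumed typical values, not figures from the paper.

rho_fw = 1000.0    # fresh water, kg/m^3 (assumed)
rho_sw = 1027.0    # ambient seawater, kg/m^3 (assumed)
rho_s = 2650.0     # quartz sediment grains, kg/m^3 (assumed)

C_crit = (rho_sw - rho_fw) / (rho_s - rho_fw)   # critical volumetric concentration
print(C_crit)      # ~0.016: below about 1.6 % sediment by volume the flow top lofts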
5. Measurement of penetration depths of plutonium and americium in sediment from the ocean floor
International Nuclear Information System (INIS)
Fried, S.; Friedman, A.; Hines, J.; Sjoblom, R.; Schmitz, G.; Schreiner, F.
1979-01-01
The clay-like sediment covering the ocean floor constitutes the last barrier that shields the biosphere from contamination by radionuclides stemming from the nuclear wastes of a subseabed repository. In the event of a failure of the engineered barriers the mobility of the released radionuclides in the sediment determines the rate and the extent of entry into the water of the ocean. The initial results of measurements designed to determine the mobility of transuranium elements in sediment from the ocean floor are presented. Data indicate very low migration rates and imply strong chemisorptive interaction with the sediment
6. The Light-Field of Microbenthic Communities - Radiance Distribution and Microscale Optics of Sandy Coastal Sediments Rid A-1977-2009
DEFF Research Database (Denmark)
KUHL, M.; JØRGENSEN, BB
1994-01-01
The light field in coastal sediments was investigated at a spatial resolution of 0.2-0.5 mm by spectral measurements (450-850 nm) of field radiance and scalar irradiance using fiber-optic microprobes. Depth profiles of field radiance were measured with radiance microprobes at representative angles relative to vertically incident collimated light in rinsed quartz sand and in a coastal sandy sediment colonized by microalgae. Upwelling and downwelling components of irradiance and scalar irradiance were calculated from the radiance distributions. Calculated total scalar irradiance agreed well ... radiance distribution. Comparison of light fields in wet and dry quartz sand showed that the lower refractive index of air than of water caused a more forward-biased scattering in wet sand. Light penetration was therefore deeper and surface irradiance reflectance was lower in wet sand than in dry sand...
7. Eddy correlation measurements of oxygen uptake in deep ocean sediments
DEFF Research Database (Denmark)
Berg, P.; Glud, Ronnie Nøhr; Hume, A.
2010-01-01
We present and compare small sediment-water fluxes of O2 determined with the eddy correlation technique, with in situ chambers, and from vertical sediment microprofiles at a 1450 m deep-ocean site in Sagami Bay, Japan. The average O2 uptake for the three approaches, respectively, was 1.62 +/- 0.23 (SE, n = 7), 1.65 +/- 0.33 (n = 2), and 1.43 +/- 0.15 (n = 25) mmol m(-2) d(-1). The very good agreement between the eddy correlation flux and the chamber flux serves as a new, important validation of the eddy correlation technique. It demonstrates that the eddy correlation instrumentation available today is precise and can resolve accurately even very small benthic O2 fluxes. The correlated fluctuations in vertical velocity and O2 concentration that give the eddy flux had average values of 0.074 cm s(-1) and 0.049 µM. The latter represents only 0.08% of the 59 µM mean O2 concentration...
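The eddy correlation flux referred to is the time-averaged covariance of vertical velocity and O2 fluctuations; a minimal sketch follows, with simple mean removal standing in for the detrending, coordinate rotation and averaging choices of a real processing chain.

import numpy as np

def eddy_flux(w, c):
    """w in m/s, c (O2) in mmol/m^3; returns the eddy flux <w'c'> in mmol m^-2 s^-1."""
    w_prime = w - np.mean(w)
    c_prime = c - np.mean(c)
    return np.mean(w_prime * c_prime)

# For scale: an uptake of 1.5 mmol m^-2 d^-1 corresponds to a mean covariance
# of about -1.7e-5 mmol m^-2 s^-1.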
8. Ocean floor sediment as a repository barrier: comparative diffusion data for selected radionuclides in sediments from the Atlantic and Pacific Oceans
International Nuclear Information System (INIS)
Schreiner, F.; Sabau, C.; Friedman, A.; Fried, S.
1986-01-01
Effective diffusion coefficients for selected radionuclides have been measured in ocean floor sediments to provide data for the assessment of barrier effectiveness in subseabed repositories for nuclear waste. The sediments tested include illite-rich and smectite-rich red clays from the mid-plate gyre region of the Pacific Ocean, reducing sediment from the continental shelf of the northwest coast of North America, and Atlantic Ocean sediments from the Southern Nares Abyssal Plain and the Great Meteor East region. Results show extremely small effective diffusion coefficients with values less than 10^-14 m^2 s^-1 for plutonium, americium, curium, thorium, and tin. Radionuclides with high diffusion coefficients of approximately 10^-10 m^2 s^-1 include the anionic species pertechnetate, TcO4^-, iodide, I^-, and selenite, SeO3^2-. Uranyl(VI) and neptunyl(V) ions, which are stable in solution, have diffusion coefficients around 10^-12 m^2 s^-1. The diffusion behavior of most radionuclides is similar in the oxygenated Pacific sediments and in the anoxic sediments from the Atlantic. An exception is neptunium, which is immobilized by Great Meteor East sediment, but has high mobility in Southern Nares Abyssal Plain sediment. Under stagnant conditions a 30 m thick sediment layer forms an effective geologic barrier isolating radionuclides in a subseabed repository from the biosphere.
9. Ocean floor sediment as a repository barrier: comparative diffusion data for selected radionuclides in sediments from the Atlantic and Pacific Oceans
Energy Technology Data Exchange (ETDEWEB)
Schreiner, F.; Sabau, C.; Friedman, A.; Fried, S.
1985-01-01
Effective diffusion coefficients for selected radionuclides have been measured in ocean floor sediments to provide data for the assessment of barrier effectiveness in subseabed repositories for nuclear waste. The sediments tested include illite-rich and smectite-rich red clays from the mid-plate gyre region of the Pacific Ocean, reducing sediment from the continental shelf of the northwest coast of North America, and Atlantic Ocean sediments from the Southern Nares Abyssal Plain and the Great Meteor East region. Results show extremely small effective diffusion coefficients with values less than 10^-14 m^2 s^-1 for plutonium, americium, curium, thorium, and tin. Radionuclides with high diffusion coefficients of approximately 10^-10 m^2 s^-1 include the anionic species pertechnetate, TcO4^-, iodide, I^-, and selenite, SeO3^2-. Uranyl(VI) and neptunyl(V) ions, which are stable in solution, have diffusion coefficients around 10^-12 m^2 s^-1. The diffusion behavior of most radionuclides is similar in the oxygenated Pacific sediments and in the anoxic sediments from the Atlantic. An exception is neptunium, which is immobilized by Great Meteor East sediment, but has high mobility in Southern Nares Abyssal Plain sediment. Under stagnant conditions a 30 m thick sediment layer forms an effective geologic barrier isolating radionuclides in a subseabed repository from the biosphere. 13 refs., 5 figs., 1 tab.
10. Ocean floor sediment as a repository barrier: comparative diffusion data for selected radionuclides in sediments from the Atlantic and Pacific Oceans
International Nuclear Information System (INIS)
Schreiner, F.; Sabau, C.; Friedman, A.; Fried, S.
1985-01-01
Effective diffusion coefficients for selected radionuclides have been measured in ocean floor sediments to provide data for the assessment of barrier effectiveness in subseabed repositories for nuclear waste. The sediments tested include illite-rich and smectite-rich red clays from the mid-plate gyre region of the Pacific Ocean, reducing sediment from the continental shelf of the northwest coast of North America, and Atlantic Ocean sediments from the Southern Nares Abyssal Plain and the Great Meteor East region. Results show extremely small effective diffusion coefficients with values less than 10^-14 m^2 s^-1 for plutonium, americium, curium, thorium, and tin. Radionuclides with high diffusion coefficients of approximately 10^-10 m^2 s^-1 include the anionic species pertechnetate, TcO4^-, iodide, I^-, and selenite, SeO3^2-. Uranyl(VI) and neptunyl(V) ions, which are stable in solution, have diffusion coefficients around 10^-12 m^2 s^-1. The diffusion behavior of most radionuclides is similar in the oxygenated Pacific sediments and in the anoxic sediments from the Atlantic. An exception is neptunium, which is immobilized by Great Meteor East sediment, but has high mobility in Southern Nares Abyssal Plain sediment. Under stagnant conditions a 30 m thick sediment layer forms an effective geologic barrier isolating radionuclides in a subseabed repository from the biosphere. 13 refs., 5 figs., 1 tab.
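A back-of-envelope check on the barrier argument: with the effective diffusion coefficients reported above, the characteristic diffusion time t ~ L^2/D across a 30 m sediment column differs by four orders of magnitude between the sorbed actinides and the mobile anions.

SECONDS_PER_YEAR = 3.156e7

for species, D in (("actinides (Pu, Am, Cm, Th, Sn)", 1e-14),
                   ("anions (TcO4-, I-, SeO3 2-)", 1e-10)):   # m^2/s, orders of magnitude from the abstracts
    t_years = 30.0 ** 2 / D / SECONDS_PER_YEAR
    print(species, f"{t_years:.1e} years")
# ~2.9e9 years for the actinides versus ~2.9e5 years for the mobile anions.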
11. Microseisms from Superstorm Sandy
Science.gov (United States)
Sufri, Oner; Koper, Keith D.; Burlacu, Relu; de Foy, Benjamin
2014-09-01
We analyzed and visualized the microseisms generated by Superstorm Sandy as recorded by the Earthscope Transportable Array (TA) during late October through early November of 2012. We applied continuous, frequency-dependent polarization analysis to the data and were able to track the course of Sandy as it approached the Florida coastline and, later, the northeastern coast of the U.S. The energy level of Sandy was roughly comparable to the background microseism level generated by wave-wave interactions in the North Atlantic and North Pacific oceans. The maximum microseismic power and degree of polarization were observed across the TA when Sandy sharply changed its direction to the west-northwest (specifically, towards Long Island, New York) on October 29. The westward turn also briefly changed the dominant microseism period from 5 s to 8 s. We identified three other microseismic source regions during the 18 day observation period. In particular, peak-splitting in the double frequency band and the orientation of the 5 s and 8 s polarization vectors revealed two contemporaneous microseism sources, one in the North Atlantic and one in the Northeast Pacific, for the dates of November 3-4. Predictions of microseismic excitation based on ocean wave models showed consistency with the observed microseismic energy generated by Sandy and other storms.
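As a generic sketch of frequency-dependent polarization analysis (not the specific algorithm used in the study), the three components can be band-pass filtered, their covariance matrix formed, and its eigenstructure used to obtain a dominant particle-motion direction and one common degree-of-polarization measure.

import numpy as np
from scipy.signal import butter, filtfilt

def polarization(z, n, e, fs, fmin, fmax):
    """Band-limited polarization of three-component data (z, n, e sampled at fs Hz)."""
    b, a = butter(4, [fmin, fmax], btype="band", fs=fs)
    X = np.vstack([filtfilt(b, a, tr) for tr in (z, n, e)])  # 3 x nsamples
    C = np.cov(X)                                            # 3 x 3 covariance matrix
    w, v = np.linalg.eigh(C)                                 # eigenvalues in ascending order
    dop = 1.0 - (w[0] + w[1]) / (2.0 * w[2])                 # one common degree-of-polarization measure
    return dop, v[:, -1]                                     # dop in [0, 1] and dominant motion direction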
12. Heavy mineral analysis for assessing the provenance of sandy sediment in the San Francisco Bay Coastal System
Science.gov (United States)
Wong, Florence L.; Woodrow, Donald L.; McGann, Mary
2013-01-01
Heavy or high-specific gravity minerals make up a small but diagnostic component of sediment that is well suited for determining the provenance and distribution of sediment transported through estuarine and coastal systems worldwide. By this means, we see that surficial sand-sized sediment in the San Francisco Bay Coastal System comes primarily from the Sierra Nevada and associated terranes by way of the Sacramento and San Joaquin Rivers and is transported with little dilution through the San Francisco Bay and out the Golden Gate. Heavy minerals document a slight change from the strictly Sierran-Sacramento mineralogy at the confluence of the two rivers to a composition that includes minor amounts of chert and other Franciscan Complex components west of Carquinez Strait. Between Carquinez Strait and the San Francisco Bar, Sierran sediment is intermingled with Franciscan-modified Sierran sediment. The latter continues out the Gate and turns southward towards beaches of the San Francisco Peninsula. The Sierran sediment also fans out from the San Francisco Bar to merge with a Sierran province on the shelf in the Gulf of the Farallones. Beach-sand sized sediment from the Russian River is transported southward to Point Reyes where it spreads out to define a Franciscan sediment province on the shelf, but does not continue southward to contribute to the sediment in the Golden Gate area.
13. Assessing the impact of Hurricanes Irene and Sandy on the morphology and modern sediment thickness on the inner continental shelf offshore of Fire Island, New York
Science.gov (United States)
Schwab, William C.; Baldwin, Wayne E.; Denny, Jane F.
2016-01-15
This report documents the changes in seabed morphology and modern sediment thickness detected on the inner continental shelf offshore of Fire Island, New York, before and after Hurricanes Irene and Sandy made landfall. Comparison of acoustic backscatter imagery, seismic-reflection profiles, and bathymetry collected in 2011 and in 2014 shows that sedimentary structures and depositional patterns moved alongshore to the southwest in water depths up to 30 meters during the 3-year period. The measured lateral offset distances range between about 1 and 450 meters with a mean of 20 meters. The mean distances computed indicate that change tended to decrease with increasing water depth. Comparison of isopach maps of modern sediment thickness shows that a series of shoreface-attached sand ridges, which are the dominant sedimentary structures offshore of Fire Island, migrated toward the southwest because of erosion of the ridge crests and northeast-facing flanks as well as deposition on the southwest-facing flanks and in troughs between individual ridges. Statistics computed suggest that the modern sediment volume across the approximately 81 square kilometers of common sea floor mapped in both surveys decreased by 2.8 million cubic meters, a mean change of –0.03 meters, which is smaller than the resolution limit of the mapping systems used.
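The reported volume and mean-thickness changes are mutually consistent, as a quick check shows:

dV = -2.8e6            # change in modern sediment volume, m^3
area = 81.0e6          # common sea floor mapped in both surveys, m^2 (about 81 km^2)
print(dV / area)       # ~ -0.035 m, matching the reported mean change of about -0.03 m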
14. Reconciling surface ocean productivity, export fluxes and sediment composition in a global biogeochemical ocean model
Directory of Open Access Journals (Sweden)
M. Gehlen
2006-01-01
This study focuses on an improved representation of the biological soft tissue pump in the global three-dimensional biogeochemical ocean model PISCES. We compare three parameterizations of particle dynamics: (1) the model standard version including two particle size classes, aggregation-disaggregation and prescribed sinking speed; (2) an aggregation-disaggregation model with a particle size spectrum and prognostic sinking speed; (3) a mineral ballast parameterization with no size classes, but prognostic sinking speed. In addition, the model includes a description of surface sediments and organic carbon early diagenesis. Model output is compared to data or data-based estimates of ocean productivity, pe-ratios, particle fluxes, surface sediment bulk composition and benthic O2 fluxes. Model results suggest that different processes control POC fluxes at different depths. In the wind-mixed layer, turbulent particle coagulation appears to be a key process in controlling pe-ratios. Parameterization (2) yields simulated pe-ratios that compare well to observations. Below the wind-mixed layer, POC fluxes are most sensitive to the intensity of zooplankton flux feeding, indicating the importance of zooplankton community composition. All model parameters being kept constant, the capability of the model to reproduce yearly mean POC fluxes below 2000 m and benthic oxygen demand does not, to first order, depend on the resolution of the particle size spectrum. Aggregate formation appears essential to initiate an intense biological pump. At great depth the reported close-to-constant particle fluxes are most likely the result of the combined effect of aggregate formation and mineral ballasting.
15. Relationship between chemical composition and magnetic susceptibility in sediment cores from Central Indian Ocean Basin
Digital Repository Service at National Institute of Oceanography (India)
Pattan, J.N.; Parthiban, G.; Banakar, V.K.; Tomer, A.; Kulkarni, M.
Three sediment cores in a north–south transect (3 degrees N to 13 degrees S) from different sediment types of the Central Indian Ocean Basin (CIOB) are studied to understand the possible relationship between magnetic susceptibility (Chi) and Al, Fe...
16. Creep of ocean sediments resulting from the isolation of radioactive wastes
International Nuclear Information System (INIS)
Dawson, P.R.; Chavez, P.F.; Lipkin, J.; Silva, A.J.
1980-01-01
Predictive models for the creep of deep ocean sediments resulting from the disposal of radioactive wastes are presented and preliminary observations of a program for evaluation of creep constitutive equation parameters are discussed. The models are used to provide calculated response of sediments under waste disposal conditions
17. Comparison of the anaerobic microbiota of deep-water Geodia spp. and sandy sediments in the Straits of Florida.
Science.gov (United States)
Brück, Wolfram M; Brück, Thomas B; Self, William T; Reed, John K; Nitecki, Sonja S; McCarthy, Peter J
2010-05-01
Marine sediments and sponges may show steep variations in redox potential, providing niches for both aerobic and anaerobic microorganisms. Geodia spp. and sediment specimens from the Straits of Florida were fixed using paraformaldehyde and 95% ethanol (v/v) for fluorescence in situ hybridization (FISH). In addition, homogenates of sponge and sediment samples were incubated anaerobically on various cysteine-supplemented agars. FISH analysis showed a prominent similarity of microbiota in sediments and Geodia spp. samples. Furthermore, the presence of sulfate-reducing and anammox bacteria as well as other obligate anaerobic microorganisms in both Geodia spp. and sediment samples was also confirmed. Anaerobic cultures obtained from the homogenates allowed the isolation of a variety of facultative anaerobes, primarily Bacillus spp. and Vibrio spp. Obligate anaerobes such as Desulfovibrio spp. and Clostridium spp. were also found. We also provide the first evidence for a culturable marine member of the Chloroflexi, which may enter into symbiotic relationships with deep-water sponges such as Geodia spp. Resuspended sediment particles may provide a source of microorganisms able to associate or form a symbiotic relationship with sponges.
18. Punctuated Sediment Input into Small Subpolar Ocean Basins During Heinrich Events and Preservation in the Stratigraphic Record
Science.gov (United States)
Hesse, R.
2006-12-01
generated from fresh-water discharges into the sea that can produce reversed buoyancy, as is well known from experiments. When the flows have traveled long enough, their tops will have lost enough sediment by settling such that their density decreases below that of the ambient seawater causing the current tops to lift up. The turbid fresh-water clouds buoyantly rise out of the turbidity current to a level of equal density, presumably the pycnocline, where they spread out laterally, even up-current, and generate interflows that deposit graded layers. The process is slow enough to allow incorporation into the graded layers of debris melting out of drifting icebergs. The observed lofted depositional facies is exclusively found in Heinrich layers. The most likely candidates for the parent currents from which lofting occurred were the sandy flows that formed the sand abyssal plain. Through this stratigraphic relationship the lofted facies ties the main pulses of Late Pleistocene sediment supply in the Labrador Basin to Heinrich events. Dating of pelagic interlayers during future ocean drilling may provide the proof that packages of sand turbidites underlying the abyssal plain are correlated to individual Heinrich events. The correlation may thus be documented in the stratigraphic record. Similar situations may exist in the Bering Sea or along the Maury Channel System in North Atlantic.
19. High rates of microbial carbon turnover in sediments in the deepest oceanic trench on Earth
DEFF Research Database (Denmark)
Glud, Ronnie N.; Wenzhoefer, Frank; Middelboe, Mathias
2013-01-01
Microbes control the decomposition of organic matter in marine sediments. Decomposition, in turn, contributes to oceanic nutrient regeneration and influences the preservation of organic carbon(1). Generally, rates of benthic decomposition decline with increasing water depth, although given the vast extent of the abyss, deep-sea sediments are quantitatively important for the global carbon cycle(2,3). However, the deepest regions of the ocean have remained virtually unexplored(4). Here, we present observations of microbial activity in sediments at Challenger Deep in the Mariana Trench in the central...
20. Geotechnical properties of deep-ocean sediments: a critical state approach
International Nuclear Information System (INIS)
Ho, E.W.L.
1988-11-01
The possible disposal of high-level radioactive waste using the sediments of the deep-ocean floor as repositories has initiated research to establish an understanding of the fundamental behaviour of deep-ocean sediments. The work described in this thesis consisted of a series of triaxial stress path tests using microcomputer controlled hydraulic triaxial cells to investigate the strength and stress-strain behaviour for mainly anisotropically (K0) consolidated 'undisturbed' (tubed) and reconstituted specimens of deep-ocean sediments taken from two study areas in the North Atlantic Ocean. The test results have been analysed within the framework of critical state soil mechanics to investigate sediment characteristics such as the state boundary surface, drained and undrained strength and stress-strain behaviour. While marked anisotropic behaviour is found in a number of respects, the results indicate that analysis in a critical state framework is as valid as for terrestrial sediments. Differences in behaviour between tubed and reconstituted specimens have been observed and the effect of the presence of carbonate has been investigated. An attempt has been made to develop an elasto-plastic constitutive K0 model based on critical state concepts. This model has been found to agree reasonably well with experimental data for kaolin and deep-ocean sediments. (author)
1. Deep ocean ventilation, carbon isotopes, marine sedimentation and the deglacial CO2 rise
Directory of Open Access Journals (Sweden)
C. Heinze
2011-07-01
The link between the atmospheric CO2 level and the ventilation state of the deep ocean is an important building block of the key hypotheses put forth to explain glacial-interglacial CO2 fluctuations. In this study, we systematically examine the sensitivity of atmospheric CO2 and its carbon isotope composition to changes in deep ocean ventilation, the ocean carbon pumps, and sediment formation in a global 3-D ocean-sediment carbon cycle model. Our results provide support for the hypothesis that a break-up of Southern Ocean stratification and invigorated deep ocean ventilation were the dominant drivers for the early deglacial CO2 rise of ~35 ppm between the Last Glacial Maximum and 14.6 ka BP. Another rise of 10 ppm until the end of the Holocene is attributed to carbonate compensation responding to the early deglacial change in ocean circulation. Our reasoning is based on a multi-proxy analysis which indicates that an acceleration of deep ocean ventilation during early deglaciation is not only consistent with recorded atmospheric CO2 but also with the reconstructed opal sedimentation peak in the Southern Ocean at around 16 ka BP, the record of the δ13C of atmospheric CO2, and the reconstructed changes in the Pacific CaCO3 saturation horizon.
2. Linking Arenicola marina irrigation behavior to oxygen transport and dynamics in sandy sediments
DEFF Research Database (Denmark)
Timmermann, Karen; Banta, Gary T.; Glud, Ronnie Nøhr
2007-01-01
In this study we examine how the irrigation behavior of the common lugworm Arenicola marina affects the distribution, transport and dynamics of oxygen in sediments using microelectrodes, planar optodes and diagenetic modeling. The irrigation pattern was characterized by a regular recurring period...... and only in rare situations with very high pumping rates (>200 ml h-1) and/or a narrow feeding funnel (water....... concentration in the burrow was high (80% air saturation) and oxygen was detected at distances up to 0.7 mm from the burrow wall. Volume specific oxygen consumption rates calculated from measured oxygen profiles were up to 4 times higher for sediments surrounding worm burrows as compared to surface sediments....... Model results indicated that oxygen consumption also was higher in the feeding pocket/funnel compared to the activity in surface sediments. An oxygen budget revealed that 49% of the oxygen pumped into the burrow during lugworm irrigation was consumed by the worm itself while 23% supported the diffusive...
3. Transport and deposition of plutonium in the ocean: Evidence from Gulf of Mexico sediments
International Nuclear Information System (INIS)
Scott, M.R.; Salter, P.F.; Halverson, J.E.
1983-01-01
A study of sediments in the Gulf of Mexico shows dramatic gradients in Pu content and isotope ratios from the continental shelf to the Sigsbee Abyssal Plain. In terms of predicted direct fallout inventory of Pu, one shelf core contains 745% of the predicted inventory, while abyssal plain sediments contain only 15-20% of the predicted value. Absolute Pu concentrations of shelf sediments are also conspicuously high, up to 110 dpm/kg, compared to 13.5 dpm/kg in Mississippi River suspended sediment. There is no evidence of Pu remobilization in Gulf of Mexico shelf sediments, based on comparison of Pu profiles with Mn/Al and Fe/Al profiles. Horizontal transport of fallout nuclides from the open ocean to removal sites in ocean margin sediments is concluded to be the source of both the high concentrations and high inventories of Pu reported here. The shelf sediments show 240Pu/239Pu ratios close to 0.179, the average stratospheric fallout value, but the ratios decrease progressively across the Gulf to low values of 0.06 in abyssal plain sediments. The source of low-ratio Pu in deep-water sediments may be debris from low yield tests transported in the troposphere. Alternatively, it may represent a fraction of the Pu from global stratospheric fallout which has been separated in the water column from the remainder of the Pu in the ocean. In either case, the low-ratio material must have been removed rapidly to the sea floor where it composes a major fraction of the Pu in abyssal plain sediments. Pu delivered by global atmospheric fallout from the stratosphere has apparently remained for the most part in the water or has been transported horizontally and removed into shallow-water sediments. (orig.)
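The inventory comparison described in this record reduces to a depth integral of measured activities set against an expected fallout deposition. A minimal sketch of that bookkeeping, with purely hypothetical layer data and a placeholder fallout value (none of these numbers come from the study):

```python
# Minimal sketch: per-core Pu inventory vs. a predicted direct-fallout inventory.
# All numeric values below are illustrative placeholders, not values from the study.

def pu_inventory(activities_dpm_per_kg, dry_bulk_densities_kg_m3, thicknesses_m):
    """Depth-integrated Pu inventory (dpm per m^2) from layered core data."""
    return sum(a * rho * dz
               for a, rho, dz in zip(activities_dpm_per_kg,
                                     dry_bulk_densities_kg_m3,
                                     thicknesses_m))

# Hypothetical three-layer shelf core
inventory = pu_inventory([110.0, 60.0, 20.0],     # dpm/kg per layer
                         [700.0, 750.0, 800.0],   # kg/m^3 dry bulk density
                         [0.02, 0.02, 0.02])      # m layer thickness

predicted_fallout = 1500.0                        # dpm/m^2, placeholder
print(f"{100 * inventory / predicted_fallout:.0f}% of predicted fallout inventory")
```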
4. Intrinsic rates of petroleum hydrocarbon biodegradation in Gulf of Mexico intertidal sandy sediments and its enhancement by organic substrates
International Nuclear Information System (INIS)
Mortazavi, Behzad; Horel, Agota; Beazley, Melanie J.; Sobecky, Patricia A.
2013-01-01
The rates of crude oil degradation by the extant microorganisms in intertidal sediments from a northern Gulf of Mexico beach were determined. The enhancement in crude oil degradation by amending the microbial communities with marine organic matter was also examined. Replicate mesocosm treatments consisted of: (i) controls (intertidal sand), (ii) sand contaminated with crude oil, (iii) sand plus organic matter, and (iv) sand plus crude oil and organic matter. Carbon dioxide (CO2) production was measured daily for 42 days and the carbon isotopic ratio of CO2 (δ13C-CO2) was used to determine the fraction of CO2 derived from microbial respiration of crude oil. Bacterial 16S rRNA clone library analyses indicated members of Actinobacteria, Bacteroidetes, and Chloroflexi occurred exclusively in control sediments whereas Alphaproteobacteria, Betaproteobacteria, Gammaproteobacteria, and Firmicutes occurred in both control and oil contaminated sediments. Members of the hydrocarbon-degrading genera Hydrocarboniphaga, Pseudomonas, and Pseudoxanthomonas were found primarily in oil contaminated treatments. Hydrocarbon mineralization was 76% higher in the crude oil amended with organic matter treatment compared to the rate in the crude oil only treatment, indicating that biodegradation of crude oil in the intertidal zone by an extant microbial community is enhanced by input of organic matter.
5. Intrinsic rates of petroleum hydrocarbon biodegradation in Gulf of Mexico intertidal sandy sediments and its enhancement by organic substrates
Energy Technology Data Exchange (ETDEWEB)
Mortazavi, Behzad [University of Alabama, Department of Biological Sciences, Box 870344, University of Alabama, Tuscaloosa, AL 35487 (United States); Dauphin Island Sea Lab, 101 Bienville Boulevard, Dauphin Island, AL, 36528 (United States); Horel, Agota [University of Alabama, Department of Biological Sciences, Box 870344, University of Alabama, Tuscaloosa, AL 35487 (United States); Dauphin Island Sea Lab, 101 Bienville Boulevard, Dauphin Island, AL, 36528 (United States); Beazley, Melanie J.; Sobecky, Patricia A. [University of Alabama, Department of Biological Sciences, Box 870344, University of Alabama, Tuscaloosa, AL 35487 (United States)
2013-01-15
The rates of crude oil degradation by the extant microorganisms in intertidal sediments from a northern Gulf of Mexico beach were determined. The enhancement in crude oil degradation by amending the microbial communities with marine organic matter was also examined. Replicate mesocosm treatments consisted of: (i) controls (intertidal sand), (ii) sand contaminated with crude oil, (iii) sand plus organic matter, and (iv) sand plus crude oil and organic matter. Carbon dioxide (CO2) production was measured daily for 42 days and the carbon isotopic ratio of CO2 (δ13C-CO2) was used to determine the fraction of CO2 derived from microbial respiration of crude oil. Bacterial 16S rRNA clone library analyses indicated members of Actinobacteria, Bacteroidetes, and Chloroflexi occurred exclusively in control sediments whereas Alphaproteobacteria, Betaproteobacteria, Gammaproteobacteria, and Firmicutes occurred in both control and oil contaminated sediments. Members of the hydrocarbon-degrading genera Hydrocarboniphaga, Pseudomonas, and Pseudoxanthomonas were found primarily in oil contaminated treatments. Hydrocarbon mineralization was 76% higher in the crude oil amended with organic matter treatment compared to the rate in the crude oil only treatment, indicating that biodegradation of crude oil in the intertidal zone by an extant microbial community is enhanced by input of organic matter.
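The isotope-based partitioning described in the two records above amounts to a two-endmember mass balance on δ13C of respired CO2. A minimal sketch, with hypothetical endmember and measurement values (the study's actual endmembers are not given here):

```python
# Minimal sketch of the two-endmember isotope mass balance implied above: the
# fraction of respired CO2 derived from crude oil follows from the measured
# d13C of CO2 and the d13C endmembers of oil and background organic matter.
# Endmember values here are illustrative placeholders, not the study's numbers.

def oil_fraction(d13c_co2_measured, d13c_oil, d13c_background):
    """Fraction of respired CO2 attributable to crude oil (0-1)."""
    return (d13c_co2_measured - d13c_background) / (d13c_oil - d13c_background)

f = oil_fraction(d13c_co2_measured=-24.0,  # per mil, hypothetical mesocosm value
                 d13c_oil=-27.0,           # per mil, hypothetical crude-oil endmember
                 d13c_background=-20.0)    # per mil, hypothetical background OM
print(f"~{100 * f:.0f}% of respired CO2 from crude oil")
```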
6. Suspended sediment concentration and optical property observations of mixed-turbidity, coastal waters through multispectral ocean color inversion
Science.gov (United States)
Multispectral satellite ocean color data from high-turbidity areas of the coastal ocean contain information about the surface concentrations and optical properties of suspended sediments and colored dissolved organic matter (CDOM). Empirical and semi-analytical inversion algorit...
7. Organophosphate Ester Flame Retardants and Plasticizers in Ocean Sediments from the North Pacific to the Arctic Ocean.
Science.gov (United States)
Ma, Yuxin; Xie, Zhiyong; Lohmann, Rainer; Mi, Wenying; Gao, Guoping
2017-04-04
The presence of organophosphate ester (OPE) flame retardants and plasticizers in surface sediment from the North Pacific to the Arctic Ocean was observed for the first time during the fourth National Arctic Research Expedition of China in the summer of 2010. The samples were analyzed for three halogenated OPEs [tris(2-chloroethyl) phosphate (TCEP), tris(1-chloro-2-propyl) phosphate (TCPP), and tris(dichloroisopropyl) phosphate], three alkylated OPEs [triisobutyl phosphate (TiBP), tri-n-butyl phosphate, and tripentyl phosphate], and triphenyl phosphate. Σ7OPEs (the total concentration of the observed OPEs) was in the range of 159-4658 pg/g of dry weight. Halogenated OPEs were generally more abundant than the nonhalogenated OPEs; TCEP and TiBP dominated the overall concentrations. Except for the Bering Sea, Σ7OPEs values increased with increasing latitude from the Bering Strait to the Central Arctic Ocean, while the contributions of halogenated OPEs (typically TCEP and TCPP) to the total OPE profile also increased from the Bering Strait to the Central Arctic Ocean, indicating they are more likely to be transported to the remote Arctic. The median budget of 52 (range of 17-292) tons for Σ7OPEs in sediment from the Central Arctic Ocean represents only a very small fraction of their total production volume, yet the amount of OPEs in Arctic Ocean sediment was significantly larger than the sum of polybrominated diphenyl ethers (PBDEs) in the sediment, indicating they are at least equally prone to long-range transport away from source regions. Given the increasing production and usage of OPEs as substitutes for PBDEs, OPEs will continue to accumulate in the remote Arctic.
8. Enrichments in authigenic uranium in glacial sediments of the Southern Ocean; Enrichissement en uranium authigene dans les sediments glaciaires de l'ocean Austral
Energy Technology Data Exchange (ETDEWEB)
Dezileau, L. [Universidad de Conception, Programa Regional de Oceanografia Fisica y Climat PROFC, Y Centro de Investigacion Oceanografica (Chile); Bareille, G. [Pau Univ., Lab. de Chimie Analytique Bio-Inorganique et Environnement, EP-CNRS 132, 64 (France); Reyss, J.L. [CEA Saclay, Direction des Sciences de la Matiere, Lab. des Sciences du Climat et de L' environnement, Lab. Mixte CEA-CNRS, 91 - Gif-sur-Yvette (France)
2002-11-01
Four sediment cores from the Polar frontal zone and the Antarctic zone in the Indian sector of the Southern Ocean present an increase of authigenic uranium during glacial periods. We show that this increase in uranium is due to a combination of (i) an increase in the lateral transport of organic matter, (ii) a decrease in the oxygen in deep waters, and (iii) a process of diagenesis. It appears that uranium concentration cannot be used as a proxy of paleo-productivity in the Southern Ocean, as previously suggested by Kumar et al. in 1995. (authors)
9. Functional structure of laminated microbial sediments from a supratidal sandy beach of the German Wadden Sea (St. Peter-Ording)
Science.gov (United States)
Bühring, Solveig I.; Kamp, Anja; Wörmer, Lars; Ho, Stephanie; Hinrichs, Kai-Uwe
2014-01-01
Hidden from the untrained eye by a thin layer of sand, laminated microbial sediments occur in supratidal beaches along the North Sea coast. The inhabiting microbial communities organize themselves in response to vertical gradients of light, oxygen or sulfur compounds. We performed a fine-scale investigation of the vertical zonation of the microbial communities using a lipid biomarker approach, and assessed the biogeochemical processes using a combination of microsensor measurements and a 13C-labeling experiment. Lipid biomarker fingerprinting showed the overarching importance of cyanobacteria and diatoms in these systems, and heterocyst glycolipids revealed the presence of diazotrophic cyanobacteria even at 9 to 20 mm depth. The high abundance of ornithine lipids (OL) throughout the system may derive from sulfate-reducing bacteria, while a characteristic OL profile between 5 and 8 mm may indicate the presence of purple non-sulfur bacteria. The fate of 13C-labeled bicarbonate was followed by experimentally investigating its uptake into microbial lipids, revealing the overarching importance of cyanobacteria for carbon fixation. However, in deeper layers, uptake into purple sulfur bacteria was evident, and close microbial coupling could be shown by uptake of label into lipids of sulfate-reducing bacteria in the deepest layer. Microsensor measurements in sediment cores collected at a later time point revealed the same general pattern as the biomarker analysis and the labeling experiments. Oxygen and pH microsensor profiles showed active photosynthesis in the top layer. Sulfide diffusing up from deeper layers decreases just below the zone of active oxygenic photosynthesis, indicating the presence of sulfur bacteria, such as anoxygenic phototrophs that use sulfide instead of water for photosynthesis.
10. Sediment tracing by 'customised' magnetic fingerprinting: from the sub-catchment to the ocean scale
Science.gov (United States)
Maher, B.
2009-04-01
Robust identification of catchment suspended sediment sources is a prerequisite both for understanding sediment delivery processes and for targeting effective mitigation measures. Fine sediment delivery can pose management problems, especially with regard to nutrient run-off and siltation of water courses and bodies. Suspended sediment load constitutes the dominant mode of particulate material loss from catchments, but its transport is highly episodic. Identification of suspended sediment sources and fluxes is therefore a prerequisite both for understanding fluvial geomorphic processes and systems and for designing strategies to reduce sediment transport, delivery and yields. Here, sediment 'fingerprinting' is discussed, using the magnetic properties of soils and sediments to characterise sediment sources and transport pathways over a very wide range of spatial scales, from Lake Bassenthwaite in the English Lake District to the Burdekin River in Queensland and even the North Atlantic Ocean during the last glacial maximum. The applicability of magnetic 'fingerprinting' to such a range of scales and environments has been significantly improved recently through the use of new and site-appropriate magnetic measurement techniques, statistical processing and sample treatment options.
11. Tropical to extratropical: Marine environmental changes associated with Superstorm Sandy prior to its landfall
Science.gov (United States)
Zambon, Joseph B.; He, Ruoying; Warner, John C.
2014-12-01
Superstorm Sandy was a massive storm that impacted the U.S. East Coast on 22-31 October 2012, generating large waves, record storm surges, and major damage. The Coupled Ocean-Atmosphere-Wave-Sediment Transport modeling system was applied to hindcast this storm. Sensitivity experiments with increasing complexity of air-sea-wave coupling were used to depict characteristics of this immense storm as it underwent tropical to extratropical transition. Regardless of coupling complexity, model-simulated tracks were all similar to the observations, suggesting the storm track was largely determined by large-scale synoptic atmospheric circulation, rather than by local processes resolved through model coupling. Analyses of the sea surface temperature, ocean heat content, and upper atmospheric shear parameters showed that as a result of the extratropical transition and despite the storm encountering much cooler shelf water, its intensity and strength were not significantly impacted. Ocean coupling was not as important as originally thought for Sandy.
12. Ammolagena clavata (Jones and Parker, 1860), an agglutinated benthic foraminiferal species - first report from the Recent sediments, Arabian Sea, Indian Ocean region
Digital Repository Service at National Institute of Oceanography (India)
Nigam, R.; Mazumder, A.; Saraswat, R.
The rare presence of the agglutinated foraminiferal species Ammolagena clavata is presented for the first time from the Recent sediments of the Indian Ocean region. This species has previously been reported in Recent sediments from all other oceans...
13. Sediment monitoring and benthic faunal sampling adjacent to the Sand Island ocean outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900088)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna and sediment in the vicinity of the Sand Island ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment grain size and...
14. Sediment Monitoring and Benthic Faunal Sampling Adjacent to the Barbers Point Ocean Outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900098)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna and sediment in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment...
15. Sediments and fossiliferous rocks from the eastern side of the Tongue of the Ocean, Bahamas
Science.gov (United States)
Gibson, T.G.; Schlee, J.
1967-01-01
In August 1966, two dives were made with the deep-diving submersible Alvin along the eastern side of the Tongue of the Ocean to sample the rock and sediment. Physiographically, the area is marked by steep slopes of silty carbonate sediment and precipitous rock cliffs dusted by carbonate debris. Three rocks, obtained from the lower and middle side of the canyon (914-1676 m depth), are late Miocene-early Pliocene to late Pleistocene-Recent in age; all are deep-water pelagic limestones. They show (i) that the Tongue of the Ocean has been a deep-water area at least back into the Miocene, and (ii) that much shallow-water detritus has been swept off neighbouring banks to be incorporated with the deep-water fauna in the sediment. © 1967 Pergamon Press Ltd.
16. Constitutive relationships for ocean sediments subjected to stress and temperature gradients
International Nuclear Information System (INIS)
Davies, T.G.; Banerjee, P.K.
1980-08-01
The disposal of low-level nuclear wastes by burial in deep sea sediments is an option currently being considered. This report lays the groundwork for an investigation of the stability of canisters containing nuclear wastes against movement due to fluidisation of the surrounding sediments, where such fluidisation may result from thermally induced stresses. The requisite constitutive relationships for ocean sediments under stress and temperature gradients are derived from the theory of critical state soil mechanics. A parametric survey has been made of the behaviour of an element of soil in order to assess various models and the importance of the governing parameters. The formulation of a finite element algorithm is given for the solution of the sediment stability problem. (author)
17. On the possible ''normalization'' of experimental curves of 230Th vertical distribution in abyssal oceanic sediments
International Nuclear Information System (INIS)
Kuznetsov, Yu.V.; Al'terman, Eh.I.; Lisitsyn, A.P.; AN SSSR, Moscow. Inst. Okeanologii)
1981-01-01
The possibilities of the method of normalization of experimental ionic curves with reference to the dating of abyssal sediments and the determination of their accumulation rates are studied. The method is based on the correlation between extrema of the ionic curves and variations of the Fe, Mn, Corg and P contents in abyssal oceanic sediments. It has been found that the above method can be successfully applied for the correction of 230Th vertical distribution data obtained by low-background γ-spectrometry. The method gives the most reliable results in cases where the vertical distribution curves of the elements that concentrate 230Th vary in parallel with one another. In many cases, the normalization of experimental ionic curves makes it possible to establish the age stratification of the sediment.
18. Fluvial fingerprints in northeast Pacific sediments: Unravelling terrestrial-ocean climate linkages
Science.gov (United States)
Vanlaningham, S. J.; Duncan, R.; Pisias, N.
2004-12-01
As the earth's climate history becomes better understood, it becomes clear that the terrestrial and oceanic systems interact in complex ways. This is seen at core sites off the Pacific Northwest (PNW) of North America. A correlation can be seen between oceanic biostratigraphic assemblages and down-core changes in terrestrial pollen types. However, it is difficult to determine whether this relationship is the result of a coupled migration of terrestrial vegetation and oceanic fauna on millennial timescales or the result of changes in ocean circulation patterns that create more complex pollen pathways to the core sites. This research begins to address this problem by examining down-core changes in sediment provenance on millennial timescales. Preliminary data characterize sediment from 24 rivers draining ten geologic provinces between latitudes 36° N and 47° N. Through clay mineralogy, major and trace element geochemistry and Ar-Ar "province" ages, ten of the 24 rivers can be uniquely identified, while six of the ten geologic provinces can be uniquely constrained geochemically. With further Nd, Sr and Pb isotopic analyses, we hope to constrain the non-unique sediment sources. We will also be presenting initial down-core geochemical results from cores EW9504-17PC and EW9504-13PC, located off southern Oregon and central California, respectively.
19. Earthquakes drive large-scale submarine canyon development and sediment supply to deep-ocean basins.
Science.gov (United States)
Mountjoy, Joshu J; Howarth, Jamie D; Orpin, Alan R; Barnes, Philip M; Bowden, David A; Rowden, Ashley A; Schimel, Alexandre C G; Holden, Caroline; Horgan, Huw J; Nodder, Scott D; Patton, Jason R; Lamarche, Geoffroy; Gerstenberger, Matthew; Micallef, Aaron; Pallentin, Arne; Kane, Tim
2018-03-01
Although the global flux of sediment and carbon from land to the coastal ocean is well known, the volume of material that reaches the deep ocean - the ultimate sink - and the mechanisms by which it is transferred are poorly documented. Using a globally unique data set of repeat seafloor measurements and samples, we show that the moment magnitude (Mw) 7.8 November 2016 Kaikōura earthquake (New Zealand) triggered widespread landslides in a submarine canyon, causing a powerful "canyon flushing" event and turbidity current that traveled >680 km along one of the world's longest deep-sea channels. These observations provide the first quantification of seafloor landscape change and large-scale sediment transport associated with an earthquake-triggered full canyon flushing event. The calculated interevent time of ~140 years indicates a canyon incision rate of 40 mm/year, substantially higher than that of most terrestrial rivers, while synchronously transferring large volumes of sediment [850 metric megatons (Mt)] and organic carbon (7 Mt) to the deep ocean. These observations demonstrate that earthquake-triggered canyon flushing is a primary driver of submarine canyon development and material transfer from active continental margins to the deep ocean.
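For orientation, the long-term delivery implied by the figures quoted in this abstract (850 Mt of sediment and 7 Mt of organic carbon per flushing event, roughly every ~140 years) can be annualized directly; a trivial sketch of that arithmetic:

```python
# Back-of-envelope annualization of the canyon-flushing fluxes quoted above,
# using only the numbers given in the abstract. Purely illustrative arithmetic.

sediment_mt, carbon_mt, interevent_yr = 850.0, 7.0, 140.0
print(f"~{sediment_mt / interevent_yr:.1f} Mt sediment per year, long-term average")
print(f"~{carbon_mt / interevent_yr:.2f} Mt organic carbon per year, long-term average")
```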
20. The effects of post-accretion sedimentation on the magnetization of oceanic crust
Science.gov (United States)
Dyment, J.; Granot, R.
2016-12-01
The presence of marine magnetic anomalies related to seafloor spreading is often considered key evidence to locate the continent-ocean boundary (COB) at passive margins. Conversely, thermal demagnetization is also advocated to explain the poor shape of such oceanic anomalies under thick sedimentary cover. To investigate the effects of post-accretion sedimentation on marine magnetic anomalies, we focus our study on two conjugate regions of the southern South Atlantic Ocean (Anomalies M4 to M0) that, although formed at the same time and along the same spreading segments, reveal contrasting characters. The anomalies exhibit strong amplitudes (>400 nT) and a well-marked shape off South Africa, where the sediments are less than 3 km thick, but become weaker (~200 nT) and much smoother off northern Argentina, where the sedimentary cover is thicker than 5 km. We interpret this observation as reflecting thermal demagnetization of the extrusive layer and its low Curie temperature titanomagnetite. We perform a series of thermo-magnetic models (Dyment and Arkani-Hamed, Geophys. J. Int., 1995, modified to include the sedimentary cover) to simulate the acquisition and loss of remanent magnetization in the oceanic lithosphere. We assume that most of the sediments accumulated shortly after crustal accretion. We investigate a range of possible thermal demagnetization temperatures for the extrusive layer and find that 200°C to 280°C best explains the observations, in reasonable agreement with Curie temperatures of titanomagnetite, suggesting that most of the extrusive layer may be demagnetized under sediments thicker than 5 km. Thermal demagnetization should therefore be considered while interpreting marine magnetic anomalies for the age and nature of the crust (i.e., continental versus oceanic) in regions with thick sedimentary cover.
1. Creep of ocean sediments resulting from the isolation of radioactive wastes
International Nuclear Information System (INIS)
Dawson, P.R.; Chavez, P.F.; Lipkin, J.; Silva, A.J.
1983-01-01
Long-term disposal of high-level radioactive wastes in subseabed sediments requires that the sediments constitute the principal barrier to the release of radionuclides over very long times. In this chapter the development of the components for mathematical modelling of creep deformations of marine sediments is presented. This development includes formulation of the conservation equations and constitutive equations that describe coupled movement and heating of the fully saturated porous sediments. Numerical methods for solving the system of governing equations for complicated two-dimensional geometries are discussed, and the program of laboratory tests for understanding the mechanical behavior of the ocean sediments is presented. Using properties taken from published literature on the creep of clays, two problems were analyzed to obtain preliminary estimates of the behavior. Analysis of cavity closure following emplacement showed that the sediment would flow around the canister before heating would significantly alter the temperature field. Large-scale motion caused by density gradients in the sediment was predicted to be small.
2. Bibliography of sandy beaches and sandy beach organisms on the African continent
CSIR Research Space (South Africa)
Bally, R
1986-01-01
This bibliography covers the literature relating to sandy beaches on the African continent and outlying islands. The bibliography lists biological, chemical, geographical and geological references and covers shallow marine sediments, surf zones off...
3. Organic geochemistry of continental margin and deep ocean sediments
Energy Technology Data Exchange (ETDEWEB)
Whelan, J.K.; Hunt, J.M.; Eglinton, T.; Dickinson, P.; Johnson, C.; Buxton, L.; Tarafa, M.E.
1990-08-01
The objective of this research continues to be the understanding of the complex processes of fossil fuel formation and migration. DOE-funded research to date has focused on 'case histories' of down-hole well profiles of light hydrocarbons, pyrograms, pyrolysis-GC and -GCMS parameters, and biomarker data from wells in the Louisiana and Texas Gulf Coasts and the Alaskan North Slope. In the case of the Alaskan North Slope, geological data and one-dimensional maturation modeling have been integrated in order to better constrain possible source rocks, timing, and migration routes for oil and gas generation and expulsion processes. During this period, biomarker analyses and organic petrographic analyses were completed for the Ikpikpuk well. In the case of the Gulf Coast, we have obtained a one-dimensional maturation model of the Cost B-1 well in the E. Cameron field of the Louisiana Gulf Coast. The completed E. Cameron data set adds to the enigma of the Gulf Coast oils found on the continental shelf of Louisiana. If significant quantities of the oil are coming from relatively organic-lean Tertiary rocks, then 'non-conventional' expulsion and migration mechanisms, such as gas dissolved in oil, must be invoked to explain the Gulf Coast oils reservoired on the Louisiana continental shelf. We are designing and starting to assemble a hydrous pyrolysis apparatus to follow, in the laboratory, rates of generation and expulsion of sediment gases. Initiation of some new research to examine δ13C of individual compounds from pyrolysis is also described. We are beginning to examine both the laboratory and field data from the Gulf Coast in the context of a Global Basin Research Network (GBRN). The purpose is to better understand subsurface fluid flow processes over geologic time in sedimentary basins and their relation to resource accumulation (i.e., petroleum and metal ores). 58 refs.
4. Rare earth element geochemistry of oceanic ferromanganese nodules and associated sediments
Science.gov (United States)
Elderfield, H.; Hawkesworth, C. J.; Greaves, M. J.; Calvert, S. E.
1981-04-01
Analyses have been made of REE contents of a well-characterized suite of deep-sea (>4000 m), principally todorokite-bearing ferromanganese nodules and associated sediments from the Pacific Ocean. REE in nodules and their sediments are closely related: nodules with the largest positive Ce anomalies are found on sediments with the smallest negative Ce anomalies; in contrast, nodules with the highest contents of other rare earths (3+ REE) are found on sediments with the lowest 3+ REE contents and vice versa. 143Nd/144Nd ratios in the nodules (~0.51244) point to an original seawater source but an identical ratio for sediments in combination with the REE patterns suggests that diagenetic reactions may transfer elements into the nodules. Analysis of biogenic phases shows that the direct contribution of plankton and carbonate and siliceous skeletal materials to REE contents of nodules and sediments is negligible. Inter-element relationships and leaching tests suggest that REE contents are controlled by a P-rich phase with a REE pattern similar to that for biogenous apatite and an Fe-rich phase with a pattern the mirror image of that for sea water. It is proposed that 3+ REE concentrations are controlled by the surface chemistry of these phases during diagenetic reactions which vary with sediment accumulation rate. Processes which favour the enrichment of transition metals in equatorial Pacific nodules favour the depletion of 3+ REE in nodules and enrichment of 3+ REE in associated sediments. In contrast, Ce appears to be added both to nodules and sediments directly from seawater and is not involved in diagenetic reactions.
5. Distribution of 137Cs in samples of ocean bottom sediments of the Baltic Sea in 1982-1983
International Nuclear Information System (INIS)
Gedenov, L.I.; Flegontov, V.M.; Ivanova, L.M.; Kostandov, K.A.
1986-01-01
The concentration of Cs-137 in samples of ocean bottom sediments collected in 1979 in the Gulf of Finland with a geological sampling tube varied over a wide range of values. The results could indicate nonuniformity of the Cs-137 distribution in ocean bottom sediments as well as the penetration of significant amounts of Cs-137 to large depths. The main source of error was the sampling technique employed, because the upper part of the sediment could be lost. In 1982, a special bottom-sampling device, with which the upper layer of sediments could be sampled together with the near-bottom water layer, was tested in the Gulf of Finland and the northeastern part of the Baltic Sea. The results of a layerwise determination of the Cs-137 concentration in samples of ocean bottom sediments of the Gulf of Finland and of the Baltic Sea are listed. The new sampling device, which collects ocean sediment samples with undisturbed stratification, will allow a correct determination of the radionuclide accumulation in the upper layers of ocean bottom sediments in the Baltic Sea.
6. Empirical evidence reveals seasonally dependent reduction in nitrification in coastal sediments subjected to near future ocean acidification
NARCIS (Netherlands)
Braeckman, U.; Van Colen, C.; Guilini, K.; Van Gansbeke, D.; Soetaert, K.; Vincx, M.; Vanaverbeke, J.
2014-01-01
Research so far has provided little evidence that benthic biogeochemical cycling is affected by ocean acidification under realistic climate change scenarios. We measured nutrient exchange and sediment community oxygen consumption (SCOC) rates to estimate nitrification in natural coastal permeable
7. Biological, physical, nutrients, sediment, and other data from sediment sampler-grab, bottle, and CTD casts in the Arabian Sea, Equatorial Pacific Ocean, Northeast Atlantic Ocean, and Southern Oceans as part of the Long Term Monitoring East-West Flower Garden Banks project from 08 January 1995 to 08 April 1998 (NODC Accession 0001155)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Biological, physical, nutrients, sediment, and other data were collected using sediment sampler-grab, bottle and CTD casts in the Arabian Sea, North/South Pacific...
8. Numerical modeling of pore-scale phenomena during CO2 sequestration in oceanic sediments
International Nuclear Information System (INIS)
Kang, Qinjun; Tsimpanogiannis, Ioannis N.; Zhang, Dongxiao; Lichtner, Peter C.
2005-01-01
Direct disposal of liquid CO2 on the ocean floor is one of the approaches considered for sequestering CO2 in order to reduce its concentration in the atmosphere. At oceanic depths deeper than approximately 3000 m, liquid CO2 density is higher than the density of seawater and CO2 is expected to sink and form a pool at the ocean floor. In addition to chemical reactions between CO2 and seawater to form hydrate, fluid displacement is also expected to occur within the ocean floor sediments. In this work, we consider two different numerical models for hydrate formation at the pore scale. The first model consists of the Lattice Boltzmann (LB) method applied to a single-phase supersaturated solution in a constructed porous medium. The second model is based on the Invasion Percolation (IP) in pore networks, applied to two-phase immiscible displacement of seawater by liquid CO2. The pore-scale results are upscaled to obtain constitutive relations for porosity, both transverse and for the entire domain, and for permeability. We examine deposition and displacement patterns, and changes in porosity and permeability due to hydrate formation, and how these properties depend on various parameters including a parametric study of the effect of hydrate formation kinetics. According to the simulations, the depth of CO2 invasion in the sediments is controlled by changes in the pore-scale porosity close to the hydrate formation front. (author)
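The second model mentioned above, invasion percolation, is simple enough to sketch. The following is a generic, illustrative IP implementation on a 2-D lattice with random entry thresholds (not the authors' code, network geometry, or parameters): the invading phase enters from the top face and always occupies the accessible pore with the lowest threshold, until breakthrough at the bottom face.

```python
# Minimal, generic invasion-percolation sketch on a 2-D pore lattice.
# Thresholds are random placeholders standing in for capillary entry pressures.
import heapq
import random

def invasion_percolation(nx=20, nz=20, seed=0):
    random.seed(seed)
    threshold = [[random.random() for _ in range(nx)] for _ in range(nz)]
    invaded = [[False] * nx for _ in range(nz)]
    frontier = []  # (entry threshold, row, col); row 0 is the sediment surface
    for j in range(nx):
        heapq.heappush(frontier, (threshold[0][j], 0, j))
    invasion_order = []
    while frontier:
        t, i, j = heapq.heappop(frontier)
        if invaded[i][j]:
            continue
        invaded[i][j] = True
        invasion_order.append((i, j))
        if i == nz - 1:          # breakthrough at the bottom face: stop
            break
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nz and 0 <= nj < nx and not invaded[ni][nj]:
                heapq.heappush(frontier, (threshold[ni][nj], ni, nj))
    return invasion_order

print(f"{len(invasion_percolation())} pores invaded before breakthrough")
```

The resulting invaded cluster can then be post-processed for invasion depth or porosity change, analogous in spirit to the upscaling step described in the abstract.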
9. Residual β activity of particulate 234Th as a novel proxy for tracking sediment resuspension in the ocean
Science.gov (United States)
Lin, Wuhui; Chen, Liqi; Zeng, Shi; Li, Tao; Wang, Yinghui; Yu, Kefu
2016-01-01
Sediment resuspension occurs in the global ocean, which greatly affects material exchange between the sediment and the overlying seawater. The behaviours of carbon, nutrients, heavy metals, and other pollutants at the sediment-seawater boundary will further link to climate change, eutrophication, and marine pollution. Residual β activity of particulate 234Th (RAP234) is used as a novel proxy to track sediment resuspension in different marine environments, including the western Arctic Ocean, the South China Sea, and the Southern Ocean. Sediment resuspension identified by high activity of RAP234 is supported by different lines of evidence including seawater turbidity, residence time of total 234Th, Goldschmidt’s classification, and ratio of RAP234 to particulate organic carbon. A conceptual model is proposed to elucidate the mechanism for RAP234 with dominant contributions from 234Th-238U and 212Bi-228Th. The ‘slope assumption’ for RAP234 indicated increasing intensity of sediment resuspension from spring to autumn under the influence of the East Asian monsoon system. RAP234 can shed new light on 234Th-based particle dynamics and should benefit the interpretation of historical 234Th-238U database. RAP234 resembles lithophile elements and has broad implications for investigating particle dynamics in the estuary-shelf-slope-ocean continuum and linkage of the atmosphere-ocean-sediment system. PMID:27252085
10. Residual β activity of particulate (234)Th as a novel proxy for tracking sediment resuspension in the ocean.
Science.gov (United States)
Lin, Wuhui; Chen, Liqi; Zeng, Shi; Li, Tao; Wang, Yinghui; Yu, Kefu
2016-06-02
Sediment resuspension occurs in the global ocean, which greatly affects material exchange between the sediment and the overlying seawater. The behaviours of carbon, nutrients, heavy metals, and other pollutants at the sediment-seawater boundary will further link to climate change, eutrophication, and marine pollution. Residual β activity of particulate (234)Th (RAP234) is used as a novel proxy to track sediment resuspension in different marine environments, including the western Arctic Ocean, the South China Sea, and the Southern Ocean. Sediment resuspension identified by high activity of RAP234 is supported by different lines of evidence including seawater turbidity, residence time of total (234)Th, Goldschmidt's classification, and ratio of RAP234 to particulate organic carbon. A conceptual model is proposed to elucidate the mechanism for RAP234 with dominant contributions from (234)Th-(238)U and (212)Bi-(228)Th. The 'slope assumption' for RAP234 indicated increasing intensity of sediment resuspension from spring to autumn under the influence of the East Asian monsoon system. RAP234 can shed new light on (234)Th-based particle dynamics and should benefit the interpretation of historical (234)Th-(238)U database. RAP234 resembles lithophile elements and has broad implications for investigating particle dynamics in the estuary-shelf-slope-ocean continuum and linkage of the atmosphere-ocean-sediment system.
11. Certified reference materials for radionuclides in Bikini Atoll sediment (IAEA-410) and Pacific Ocean sediment (IAEA-412)
DEFF Research Database (Denmark)
Pham, M. K.; van Beek, P.; Carvalho, F. P.
2016-01-01
The preparation and characterization of certified reference materials (CRMs) for radionuclide content in sediments collected offshore of Bikini Atoll (IAEA-410) and in the open northwest Pacific Ocean (IAEA-412) are described and the results of the certification process are presented. The certified...... radionuclides include: 40K, 210Pb (210Po), 226Ra, 228Ra, 228Th, 232Th, 234U, 238U, 239Pu, 239+240Pu and 241Am for IAEA-410 and 40K, 137Cs, 210Pb (210Po), 226Ra, 228Ra, 228Th, 232Th, 235U, 238U, 239Pu, 240Pu and 239+240Pu for IAEA-412. The CRMs can be used for quality assurance and quality control purposes...
12. Hydrothermal signature in the axial-sediments from the Carlsberg Ridge in the northwest Indian Ocean
Science.gov (United States)
Yu, Zenghui; Li, Huaiming; Li, Mengxing; Zhai, Shikui
2018-04-01
Thirty sediment samples grabbed from 24 sites between the equator and 10°N along the Carlsberg Ridge (CR) in the northwest Indian Ocean were analyzed for bulk chemical compositions. Hydrothermal components in the sediments are identified and characterized. They occur mainly at 6.3°N as sulfide debris and at 3.6°N as both sulfide and high-temperature water-rock interaction products. The enrichment of chalcophile elements such as Zn, Cu and Pb and the depletion of alkali metals such as K and Rb are the typical features of the hydrothermal components. High U/Fe, low (Nd/Yb)N and a negative Ce anomaly suggest uptake of seawater by the hydrothermal deposits through oxidation after deposition. However, the enrichment of Mn that is generally found in hydrothermal plume-derived materials is not observed in these sediments, which may indicate limited diffusion of fluids or plumes, at least in the direction along the Carlsberg spreading center. The hydrothermal components are similar to hydrothermal deposits from the Indian Ocean Ridge. At 3.6°N, ultramafic rocks or gabbroic intrusions may be involved in the hydrothermal system.
13. A numerical study on oceanic dispersion and sedimentation of radioactive cesium-137 from Fukushima Daiichi Nuclear Power Plant
International Nuclear Information System (INIS)
Higashi, Hironori; Morino, Yu; Ohara, Toshimasa
2014-01-01
We present a numerical model for the oceanic dispersion and sedimentation of radioactive cesium-137 (Cs-137) in shallow water regions to clarify the migration behavior of Cs-137 released from the Fukushima Daiichi Nuclear Power Plant. Our model considers oceanic transport by the three-dimensional ocean current, adsorption onto large particulate matter (LPM), sedimentation and resuspension. The simulation reproduced well the spatial characteristics of the sea surface and sediment surface concentrations of Cs-137 off Miyagi, Fukushima, and Ibaraki Prefectures during May-December 2011. The simulated results indicated that adsorption and sedimentation of Cs-137 occurred mainly during strong wind events, because large amounts of LPM were transported to the upper layers by resuspension and vertical mixing. (author)
14. 2012 U.S. Geological Survey Topographic Lidar: Northeast Atlantic Coast Post-Hurricane Sandy
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Binary point-cloud data were produced for a portion of the New York, Delaware, Maryland, Virginia, and North Carolina coastlines, post-Hurricane Sandy (Sandy was an...
15. Study of elementary absorption in the marine sediments of the North Atlantic ocean deeps
International Nuclear Information System (INIS)
Rancon, D.; Guegueniat, P.
1984-01-01
We have studied the retention of actinide elements (Np, Pu, Am) and of Cs in the sediments of the ocean deeps around Cape Verde. Plutonium: retention increases with temperature from 4 to 30 °C, then stays constant from 30 to 80 °C. Desorption is slow. Americium: absorption is very strong at any temperature. Measurements on a wide variety of sediments show that retention is not affected by facies (including carbonated sediments). Neptunium: retention is more or less constant between 4 and 15 °C, and distinctly higher at 30-50 °C. It is reversible. Caesium: absorption decreases slightly from 4 to 30 °C, but increases rapidly from 50 to 80 °C. At the lowest temperatures it is reversible, but it appears to be irreversible at 50 °C. Cs absorption depends on its ponderal (mass) concentration: for equal amounts of activity, retention of Cs-135 is weaker than that of Cs-137; likewise, the addition of the stable isotope reduces the Kd values of Cs-137. Finally, this paper presents preliminary results on the natural metallic element content of the sediments.
16. Earth system feedback statistically extracted from the Indian Ocean deep-sea sediments recording Eocene hyperthermals.
Science.gov (United States)
Yasukawa, Kazutaka; Nakamura, Kentaro; Fujinaga, Koichiro; Ikehara, Minoru; Kato, Yasuhiro
2017-09-12
Multiple transient global warming events occurred during the early Palaeogene. Although these events, called hyperthermals, have been reported from around the globe, geologic records for the Indian Ocean are limited. In addition, the recovery processes from relatively modest hyperthermals are less well constrained than those from the most severe and best-studied hothouse event, the Palaeocene-Eocene Thermal Maximum. In this study, we constructed a new, high-resolution geochemical dataset of deep-sea sediments that clearly record multiple Eocene hyperthermals in the Indian Ocean. We then statistically analysed the high-dimensional data matrix and extracted independent components corresponding to the biogeochemical responses to the hyperthermals. The productivity feedback commonly controls and efficiently sequesters the excess carbon in the recovery phases of the hyperthermals via an enhanced biological pump, regardless of the magnitude of the events. Meanwhile, this negative feedback is independent of the nannoplankton assemblage changes generally recognised in relatively large environmental perturbations.
17. A feasibility study of the disposal of radioactive waste in deep ocean sediments by drilled emplacement
International Nuclear Information System (INIS)
Bury, M.R.C.
1983-08-01
This report describes the second phase of a study of the feasibility of disposal and isolation of high level radioactive waste in holes drilled deep into the sediments of the ocean. In this phase, work has concentrated on establishing the state of the art of the various operations and developing the design, in particular the drilling operation, the loading of flasks containing waste canisters from supply vessels onto the platform, the handling of radioactive waste on board, and its emplacement into predrilled holes. In addition, an outline design of the offshore platform has been prepared. (author)
18. Investigation of hurricane Ivan using the coupled ocean-atmosphere-wave-sediment transport (COAWST) model
Science.gov (United States)
Zambon, Joseph B.; He, Ruoying; Warner, John C.
2014-01-01
The coupled ocean–atmosphere–wave–sediment transport (COAWST) model is used to hindcast Hurricane Ivan (2004), an extremely intense tropical cyclone (TC) translating through the Gulf of Mexico. Sensitivity experiments with increasing complexity in ocean–atmosphere–wave coupled exchange processes are performed to assess the impacts of coupling on the predictions of the atmosphere, ocean, and wave environments during the occurrence of a TC. Modest improvements in track but significant improvements in intensity are found when using the fully atmosphere–ocean–wave coupled configuration versus uncoupled (e.g., standalone atmosphere, ocean, or wave) model simulations. Surface wave fields generated in the fully coupled configuration also demonstrate good agreement with in situ buoy measurements. Coupled and uncoupled model-simulated sea surface temperature (SST) fields are compared with both in situ and remote observations. Detailed heat budget analysis reveals that the mixed layer temperature cooling is caused primarily by advection in the deep ocean, and equally by advection and diffusion on the shelf.
19. Metagenomic profiles of antibiotic resistance genes (ARGs) between human impacted estuary and deep ocean sediments.
Science.gov (United States)
Chen, Baowei; Yang, Ying; Liang, Ximei; Yu, Ke; Zhang, Tong; Li, Xiangdong
2013-11-19
Knowledge of the origins and dissemination of antibiotic resistance genes (ARGs) is essential for understanding modern resistomes in the environment. The mechanisms of the dissemination of ARGs can be revealed through comparative studies on the metagenomic profiling of ARGs between relatively pristine and human-impacted environments. The deep ocean bed of the South China Sea (SCS) is considered to be largely devoid of anthropogenic impacts, while the Pearl River Estuary (PRE) in south China has been highly impacted by intensive human activities. Commonly used antibiotics (sulfamethazine, norfloxacin, ofloxacin, tetracycline, and erythromycin) have been detected through chemical analysis in the PRE sediments, but not in the SCS sediments. In the relatively pristine SCS sediments, the most prevalent and abundant ARGs are those related to resistance to macrolides and polypeptides, with efflux pumps as the predominant mechanism. In the contaminated PRE sediments, the typical ARG profiles suggest a prevailing resistance to antibiotics commonly used in human health and animal farming (including sulfonamides, fluoroquinolones, and aminoglycosides), and higher diversity in both genotype and resistance mechanism than those in the SCS. In particular, antibiotic inactivation significantly contributed to the resistance to aminoglycosides, β-lactams, and macrolides observed in the PRE sediments. There was a significant correlation in the levels of abundance of ARGs and those of mobile genetic elements (including integrons and plasmids), which serve as carriers in the dissemination of ARGs in the aquatic environment. The metagenomic results from the current study support the view that ARGs naturally originate in pristine environments, while human activities accelerate the dissemination of ARGs so that microbes would be able to tolerate selective environmental stress in response to anthropogenic impacts.
20. An Ocean Sediment Core-Top Calibration of Foraminiferal (Cibicides) Stable Carbon Isotope Ratios
Science.gov (United States)
Schmittner, A.; Mix, A. C.; Lisiecki, L. E.; Peterson, C.; Mackensen, A.; Cartapanis, O. A.
2015-12-01
Stable carbon isotope ratios (δ13C) measured on calcium carbonate shells of benthic foraminifera (Cibicides) from late Holocene sediments (δ13CCib) are compiled and compared with newly updated datasets of contemporary water-column δ13C observations of dissolved inorganic carbon (δ13CDIC) as the initial core-top calibration of the international Ocean Circulation and Carbon Cycling (OC3) project. Using selection criteria based on the spatial distance between samples, we find a high correlation between δ13CCib and natural (pre-industrial) δ13CDIC, confirming earlier work. However, our analysis reveals systematic differences, such as higher (lower) δ13CCib values in the Atlantic (Indian and Pacific) oceans. Regression analyses are impacted by anthropogenic carbon and suggest significant carbonate ion, temperature, and pressure effects, consistent with lab experiments on planktonic foraminifera and with theory. The estimated standard error of core-top sediment data is generally σ ≈ 0.25‰, whereas modern foram data from the South Atlantic indicate larger errors (σ ≈ 0.4‰).
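A core-top calibration of this kind amounts to regressing δ13CCib on bottom-water δ13CDIC plus secondary terms. A minimal ordinary-least-squares sketch with tiny fabricated placeholder arrays (not OC3 data, and not the study's regression coefficients):

```python
# Minimal sketch of a core-top calibration regression: d13C_Cib against bottom-water
# d13C_DIC with carbonate-ion and temperature terms, via ordinary least squares.
# All arrays are fabricated placeholders for illustration only.
import numpy as np

d13c_cib  = np.array([1.10, 0.85, 0.40, -0.10, 0.65])   # per mil (placeholder)
d13c_dic  = np.array([1.00, 0.80, 0.35, -0.05, 0.60])   # per mil (placeholder)
delta_co3 = np.array([20.0, 15.0, 5.0, -5.0, 10.0])     # umol/kg above saturation
temp_c    = np.array([2.0, 1.5, 1.8, 1.2, 2.5])         # deg C (placeholder)

X = np.column_stack([np.ones_like(d13c_dic), d13c_dic, delta_co3, temp_c])
coef, residuals, rank, _ = np.linalg.lstsq(X, d13c_cib, rcond=None)
intercept, slope_dic, slope_co3, slope_t = coef
print(f"d13C_Cib ~ {intercept:.2f} + {slope_dic:.2f}*d13C_DIC "
      f"+ {slope_co3:.3f}*dCO3 + {slope_t:.2f}*T")
```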
1. Spatially Resolving Ocean Color and Sediment Dispersion in River Plumes, Coastal Systems, and Continental Shelf Waters
Science.gov (United States)
Aurin, Dirk Alexander; Mannino, Antonio; Franz, Bryan
2013-01-01
Satellite remote sensing of ocean color in dynamic coastal, inland, and nearshore waters is impeded by high variability in optical constituents, demands specialized atmospheric correction, and is limited by instrument sensitivity. To accurately detect dispersion of bio-optical properties, remote sensors require ample signal-to-noise ratio (SNR) to sense small variations in ocean color without saturating over bright pixels, an atmospheric correction that can accommodate significant water-leaving radiance in the near infrared (NIR), and spatial and temporal resolution that coincides with the scales of variability in the environment. Several current and historic space-borne sensors have met these requirements with success in the open ocean, but are not optimized for highly red-reflective and heterogeneous waters such as those found near river outflows or in the presence of sediment resuspension. Here we apply analytical approaches for determining optimal spatial resolution, dominant spatial scales of variability ("patches"), and proportions of patch variability that can be resolved from four river plumes around the world between 2008 and 2011. An offshore region in the Sargasso Sea is analyzed for comparison. A method is presented for processing Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua and Terra imagery including cloud detection, stray light masking, faulty detector avoidance, and dynamic aerosol correction using short-wave- and near-infrared wavebands in extremely turbid regions which pose distinct optical and technical challenges. Results show that a pixel size of approx. 520 m or smaller is generally required to resolve spatial heterogeneity in ocean color and total suspended materials in river plumes. Optimal pixel size increases with distance from shore to approx. 630 m in nearshore regions, approx. 750 m on the continental shelf, and approx. 1350 m in the open ocean. Greater than 90% of the optical variability within plume regions is resolvable with
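One common way to quantify the "patch" scales and optimal pixel size discussed above is an empirical semivariogram of an along-track ocean-color transect: the lag at which the variogram approaches its sill approximates the dominant patch scale. A hedged sketch with a synthetic transect (not MODIS data, and not necessarily the authors' exact method):

```python
# Minimal sketch of a spatial-scale analysis: an empirical semivariogram of a
# 1-D transect, from which a dominant patch scale can be read off.
# The synthetic transect below is a placeholder, not satellite data.
import numpy as np

def semivariogram(values, max_lag):
    """gamma(h) = 0.5 * mean squared difference between points separated by h samples."""
    return np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
sample_spacing_m = 250.0
x = np.arange(500) * sample_spacing_m              # along-track distance
signal = np.sin(2 * np.pi * x / 5000.0)            # ~5 km synthetic "plume" patchiness
transect = signal + 0.1 * rng.standard_normal(x.size)

gamma = semivariogram(transect, max_lag=60)
sill = gamma.max()
range_idx = int(np.argmax(gamma >= 0.95 * sill))   # first lag near the sill
print(f"approximate patch scale: {(range_idx + 1) * sample_spacing_m:.0f} m")
```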
2. Chemical composition of marine sediments in the Pacific Ocean from Sinaloa to Jalisco, Mexico
International Nuclear Information System (INIS)
Martinez, T.; Lartigue, J.; Ramos, A.; Navarrete, M.; Mulller, G.
2014-01-01
Marine sediments from Mexico's west coast in the Pacific Ocean, from Sinaloa to Jalisco, were analyzed by the energy-dispersive X-ray fluorescence technique. Ten sediment samples were collected in May 2010 between 55.5 and 1264 m water depth with a Reineck-type box corer. Sediments were dried and fractioned by granulometry. Their physical and chemical properties, pH, and conductivity were determined in the laboratory by standard methods. The concentrations and distributions of K, Ca, Ti, Mn, Fe, Cu, Zn, Ga, Pb, Br and Sr were analyzed. In order to determine the status of the elements, enrichment factors were calculated. Total and organic carbon and CaCO3 were also determined. Scanning electron microscopy and X-ray diffraction show the predominant groups of compounds. As a quality-control method, a Certified Reference Material was processed and analyzed under the same conditions. Enrichment factors for K, Ca, Ti, Mn, Fe, Cu, Zn, Ga, Ni, and Sr show they are conservative elements, having concentrations in the range of unpolluted sites and providing a baseline for the sampling zone. In spite of only moderate enrichment factors, some concentrations (μg g-1) and enrichment factors show the influence of anthropogenic sources, with values between the lowest-effect level and one third of the 250 μg g-1 value that is considered to have severe effect levels for aquatic life. (author)
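The enrichment factors referred to above are conventionally computed by double-normalizing to a conservative reference element and a baseline (crustal or local) composition. A minimal sketch with illustrative concentrations (not the study's data); the use of Ti as the reference element here is an assumption, Al or Fe being common alternatives:

```python
# Minimal sketch of the enrichment-factor calculation:
# EF = (X/Ti)_sediment / (X/Ti)_reference, with Ti as the conservative normaliser.
# Concentrations below are illustrative placeholders only.

def enrichment_factor(x_sample, ref_sample, x_crust, ref_crust):
    """EF relative to a crustal (or local baseline) composition."""
    return (x_sample / ref_sample) / (x_crust / ref_crust)

ef_zn = enrichment_factor(x_sample=95.0,  ref_sample=4500.0,   # ug/g Zn, Ti in sediment
                          x_crust=70.0,   ref_crust=4400.0)    # ug/g Zn, Ti in reference
print(f"EF(Zn) = {ef_zn:.2f}  ({'enriched' if ef_zn > 1.5 else 'near-crustal'})")
```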
3. The Nicobar Fan and sediment provenance: preliminary results from IODP Expedition 362, NE Indian Ocean
Science.gov (United States)
Pickering, K. T.; Pouderoux, H.; Milliken, K. L.; Carter, A.; Chemale, F., Jr.; Kutterolf, S.; Mukoyoshi, H.; Backman, J.; McNeill, L. C.; Dugan, B.; Expedition 362 Scientists, I.
2017-12-01
IODP Expedition 362 (6 Aug-6 Oct 2016) was designed to drill the input materials of the north Sumatran subduction zone, part of the 5000 km long Sunda subduction system, and to understand the origin of the Mw 9.2 earthquake and tsunami that devastated coastal communities around the Indian Ocean in 2004, linked to unexpectedly shallow seismogenic slip and a distinctive forearc prism structure (1,2,3). Two sites, U1480 and U1481, on the Indian oceanic plate 250 km SW of the subduction zone on the eastern flank of the Ninetyeast Ridge, were drilled, cored, and logged to a maximum depth of 1500 m below seafloor. The input materials of the north Sumatran subduction zone are a thick (up to 4-5 km) succession mainly of Bengal-Nicobar Fan siliciclastic sediments overlying a mainly pelagic/hemipelagic succession, with igneous and volcaniclastic material above oceanic basement. At Sites U1480 and U1481, above the igneous basement (~60-70 Ma), the sedimentary succession comprises deep-marine tuffaceous deposits with igneous intrusions, overlain by pelagic deposits, including chalk, and a thick Nicobar Fan succession of sediment gravity-flow (SGF) deposits, mainly turbidites and muddy debrites. The Nicobar Fan deposits (estimated total volume of 9.2 x 10^6 km^3: 3) represent >90% of the input section at the drill sites and many of the beds are rich in plant material. These beds are intercalated with calcareous clays. Sediment accumulation rates reached 10-40 cm/kyr in the late Miocene to Pliocene, but were much reduced since 1.6 Ma. The onset of Nicobar Fan deposition at the drill sites (~9.5 Ma; 2) is much younger than was anticipated precruise (~30-40 Ma), based on previous regional analyses of Bengal-Nicobar Fan history and presumptions of gradual fan progradation. Our preliminary results suggest that the Nicobar Fan was active between 1.6 and 9.5 Ma, and possibly since 30 Ma (3). The observed mineralogical assemblage of the SGF deposits and zircon age dating are consistent with
4. Hurricane Sandy: Rapid Response Imagery of the Surrounding Regions
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The imagery posted on this site is of Hurricane Sandy. The aerial photography missions were conducted by the NOAA Remote Sensing Division. The images were acquired...
5. 2014 USGS CMGP Lidar: Sandy Restoration (Delaware and Maryland)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Geographic Extent: SANDY_Restoration_DE_MD_QL2 Area of Interest covers approximately 3.096 square miles. Lot #5 contains the full project area Dataset Description:...
6. 2014 USGS CMGP Lidar: Post Sandy (Long Island, NY)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — TASK NAME: Long Island New York Sandy LIDAR Data Acquisition and Processing Production Task USGS Contract No. G10PC00057 Task Order No. G14PD00296 Woolpert...
7. Increased terrestrial to ocean sediment and carbon fluxes in the northern Chesapeake Bay associated with twentieth century land alteration
Science.gov (United States)
Saenger, C.; Cronin, T. M.; Willard, D.; Halka, J.; Kerhin, R.
2008-01-01
We calculated Chesapeake Bay (CB) sediment and carbon fluxes before and after major anthropogenic land clearance using robust monitoring, modeling and sedimentary data. Four distinct fluxes in the estuarine system were considered, including (1) the flux of eroded material from the watershed to streams, (2) the flux of suspended sediment at river fall lines, (3) the burial flux in tributary sediments, and (4) the burial flux in main CB sediments. The sedimentary maximum in Ambrosia (ragweed) pollen marked peak land clearance (~1900 A.D.). Rivers feeding CB had a total organic carbon (TOC)/total suspended solids ratio of 0.24 ± 0.12, and we used this observation to calculate TOC fluxes from sediment fluxes. Sediment and carbon fluxes increased by 138-269% across all four regions after land clearance. Our results demonstrate that sediment delivery to CB is subject to significant lags and that excess post-land clearance sediment loads have not reached the ocean. Post-land clearance increases in erosional flux from watersheds, and burial in estuaries, are important processes that must be considered to calculate accurate global sediment and carbon budgets. © 2008 Coastal and Estuarine Research Federation.
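The conversion from sediment flux to carbon flux rests on the reported TOC/TSS ratio of 0.24 ± 0.12. A minimal sketch of that conversion, with a hypothetical sediment flux (only the ratio and its spread come from the abstract):

```python
# Convert a suspended-sediment flux to a TOC flux using the reported TOC/TSS ratio.
ratio, ratio_sd = 0.24, 0.12
sediment_flux = 2.0e6                       # t/yr, hypothetical flux at a river fall line

toc_flux = sediment_flux * ratio
toc_flux_sd = sediment_flux * ratio_sd      # uncertainty dominated by the ratio's spread
print(f"TOC flux ~ {toc_flux:.2e} +/- {toc_flux_sd:.2e} t C/yr")
```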
8. Late Pleistocene sedimentation: A case study of the central Indian Ocean Basin
Digital Repository Service at National Institute of Oceanography (India)
Borole, D.V.
Deep-Sea Research I, Vol. 40, No. 4, pp. 761-775, 1993 (received 26 August 1988, in revised form). [Only garbled scanned-table text was captured for this record; the data tables are not reproduced.]
9. Bacterial diversity and biogeography in deep-sea sediments of the South Atlantic Ocean
DEFF Research Database (Denmark)
Schauer, Regina; Bienhold, Christina; Ramette, Alban
2010-01-01
Microbial biogeographic patterns in the deep sea depend on the ability of microorganisms to disperse. One possible limitation to microbial dispersal may be the Walvis Ridge that separates the Antarctic Lower Circumpolar Deep Water from the North Atlantic Deep Water. We examined bacterial communities in three basins of the eastern South Atlantic Ocean to determine diversity and biogeography of bacterial communities in deep-sea surface sediments. The analysis of 16S ribosomal RNA (rRNA) gene clone libraries in each basin revealed a high diversity, representing 521 phylotypes with 98% identity in 1051 sequences. Phylotypes affiliated with Gammaproteobacteria, Deltaproteobacteria and Acidobacteria were present in all three basins. The distribution of these shared phylotypes seemed to be influenced neither by the Walvis Ridge nor by different deep water masses, suggesting a high dispersal...
10. Driving forces and their contribution to the recent decrease in sediment flux to ocean of major rivers in China.
Science.gov (United States)
Li, Tong; Wang, Shuai; Liu, Yanxu; Fu, Bojie; Zhao, Wenwu
2018-09-01
Understanding the mechanisms behind land-ocean sediment transport processes is crucial, due to the resulting impacts on the sustainable management of water and soil resources. This study investigated temporal trends and historical phases of sediment flux delivered to the sea by nine major rivers in China, while also quantifying the contribution of key anthropogenic and natural driving forces. During the past six decades, sediment flux from these nine major rivers exhibited a statistically significant negative trend, decreasing from 1.92 Gt yr-1 during 1954-1968 to 1.39 Gt yr-1, 0.861 Gt yr-1 and 0.335 Gt yr-1 during 1969-1985, 1986-1999 and 2000-2016, respectively. We used a recently developed Sediment Identity approach and found that the sharp decrease in sediment load observed across China was mainly (~95%) caused by a reduction in sediment concentration. Reservoir construction exerted the strongest influence on land-ocean sediment fluxes, while soil conservation measures represented a secondary driver. Before 1999, soil erosion was not controlled effectively in China and reservoirs, especially large ones, played a dominant role in reducing riverine sediments. After 1999, soil erosion has gradually been brought under control across China, so that conservation measures directly accounted for ~40% of the observed decrease in riverine sediments. With intensifying human activities, it is predicted that the total sediment flux delivered to the sea by the nine major rivers will continue to decrease in the coming decades, although at a slower rate, resulting in severe challenges for the sustainable management of drainage basins and river deltas. Copyright © 2018 Elsevier B.V. All rights reserved.
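The ~95% attribution to falling sediment concentration comes from the authors' Sediment Identity approach. The sketch below shows one simple way such an attribution can be made, writing the load as Qs = Qw × C and splitting the change between two periods symmetrically between the discharge and concentration terms; the discharge and concentration values are assumptions chosen so the period loads roughly match the 1.92 and 0.335 Gt yr-1 quoted above, and the decomposition is illustrative rather than the paper's exact formulation.

```python
# Toy decomposition of a sediment-load change into water-discharge and concentration contributions,
# assuming Qs = Qw * C. Period values are hypothetical, chosen to reproduce loads similar to the abstract.
qw1, c1 = 1.50e12, 1.28e-3      # period 1: water discharge (m^3/yr), sediment concentration (t/m^3)
qw2, c2 = 1.40e12, 0.24e-3      # period 2

qs1, qs2 = qw1 * c1, qw2 * c2
dq = qs2 - qs1
from_discharge = (qw2 - qw1) * 0.5 * (c1 + c2)        # symmetric (midpoint) attribution
from_concentration = (c2 - c1) * 0.5 * (qw1 + qw2)    # the two terms sum exactly to dq

print(f"load change: {dq:.3e} t/yr")
print(f"  discharge contribution:     {from_discharge / dq:.0%}")
print(f"  concentration contribution: {from_concentration / dq:.0%}")
```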
11. Micrometer- and nanometer-sized platinum group nuggets in micrometeorites from deep-sea sediments of the Indian Ocean
Digital Repository Service at National Institute of Oceanography (India)
Rudraswami, N.G.; Parashar, K.; ShyamPrasad, M.
We examined 378 micrometeorites collected from deep-sea sediments of the Indian Ocean of which 175, 180, and 23 are I-type, S-type, and G-type, respectively. Of the 175 I-type spherules, 13 contained platinum group element nuggets (PGNs...
12. Late Quaternary palaeo-oceanography and palaeo-climatology from sediment cores of the eastern Arctic Ocean
International Nuclear Information System (INIS)
Pagels, U.; Koehler, S.
1991-01-01
Box cores recovered along a N-S transect in the Eurasian Basin allow the establishment of a time scale for the Late Quaternary history of the Arctic Ocean, based on stable oxygen isotope stratigraphy and AMS 14C dating of planktonic foraminifers (N. pachyderma l.c.). This high-resolution stratigraphy, in combination with sedimentological investigations (e.g. coarse fraction analysis, carbonate content, productivity of foraminifers), was carried out to reconstruct the glacial and interglacial Arctic Ocean palaeo-environment. The sediment cores, which can be correlated throughout the sampling area in the Eastern Arctic Ocean, were dated as representing oxygen isotope stages 1 to 4/5. The sedimentation rates varied between a few mm/ka in glacials and approximately one cm/ka during the Holocene. The sediments allow a detailed sedimentological description of the depositional regime and the palaeo-oceanography of the Eastern Arctic Ocean. Changing ratios of biogenic and lithogenic components in the sediments reflect variations in the oceanographic circulation pattern in the Eurasian Basin during the Late Quaternary. Carbonate content (1-9 wt.%), productivity of foraminifers (high in interglacials, low in glacial stages) and the terrigenous components are in good correlation with glacial and interglacial climatic fluctuations.
13. EAARL-B Coastal Topography--Eastern New Jersey, Hurricane Sandy, 2012: First Surface, Pre-Sandy
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — ASCII xyz and binary point-cloud data, as well as a digital elevation model (DEM) of a portion of the New Jersey coastline, pre- and post-Hurricane Sandy (October...
14. Biogenic silica and organic carbon in sediments from the Pacific sector of the Southern Ocean
International Nuclear Information System (INIS)
Giglio, F.; Langone, L.; Morigi, C.; Frignani, M.; Ravaioli, M.
2002-01-01
Four cores, collected during the 1995/96 Italian Antarctic cruise and located north and south of the Polar Front, provided both qualitative and quantitative information about changes of the sediment settings driven by climate changes. Biogenic silica and organic carbon flux variations and sedimentological analyses allow us to make inferences about the fluctuation of the Polar Front during the last climate cycles: the records of our cores Anta96-1 and Anta96-16 account for fluctuations of the Polar Front of at least 5 degrees with respect to the present position, with a concomitant movement of the Marginal Ice Zone. The very low accumulation rates at the study sites are probably due to the scarce availability of micronutrients. In the area south of the Polar Front, sediment accumulation, after a decrease, appears constant during the last 250,000 yr.
A subdivision into glacial/interglacial stages has been proposed, which permits the identification of the warm stage 11, which is particularly important in the Southern Ocean. (author). 13 refs., 5 figs
15. Gas production potential of disperse low-saturation hydrate accumulations in oceanic sediments
International Nuclear Information System (INIS)
Moridis, George J.; Sloan, E. Dendy
2007-01-01
In this paper, we evaluate the gas production potential of disperse, low-saturation (S_H) hydrate-bearing sediments subject to depressurization-induced dissociation over a 10-year production period. We investigate the sensitivity of this production potential to the following hydraulic properties, reservoir conditions, and operational parameters: intrinsic permeability, porosity, pressure, temperature, hydrate saturation, and the constant pressure at which the production well is kept. The results of this study indicate that, despite wide variations in the aforementioned parameters (covering the entire spectrum of such deposits), gas production is very limited, never exceeding a few thousand cubic meters of gas during the 10-year production period. Such low production volumes are orders of magnitude below commonly accepted standards of economic viability, and are further burdened with very unfavorable gas-to-water ratios. The unequivocal conclusion from this study is that disperse, low-S_H hydrate accumulations in oceanic sediments are not promising targets for gas production by means of depressurization-induced dissociation, and resources for early hydrate exploitation should be focused elsewhere.
16. LOSCAR: Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir Model v2.0.4
Directory of Open Access Journals (Sweden)
R. E. Zeebe
2012-01-01
Full Text Available The LOSCAR model is designed to efficiently compute the partitioning of carbon between ocean, atmosphere, and sediments on time scales ranging from centuries to millions of years. While a variety of computationally inexpensive carbon cycle models are already available, many are missing a critical sediment component, which is indispensable for long-term integrations. One of LOSCAR's strengths is the coupling of ocean-atmosphere routines to a computationally efficient sediment module. This allows, for instance, adequate computation of CaCO3 dissolution, calcite compensation, and long-term carbon cycle fluxes, including weathering of carbonate and silicate rocks. The ocean component includes various biogeochemical tracers such as total carbon, alkalinity, phosphate, oxygen, and stable carbon isotopes. LOSCAR's configuration of ocean geometry is flexible and allows for easy switching between modern and paleo-versions. We have previously published applications of the model tackling future projections of ocean chemistry and weathering, pCO2 sensitivity to carbon cycle perturbations throughout the Cenozoic, and carbon/calcium cycling during the Paleocene-Eocene Thermal Maximum. The focus of the present contribution is the detailed description of the model including numerical architecture, processes and parameterizations, tuning, and examples of input and output. Typical CPU integration times of LOSCAR are of order seconds for several thousand model years on current standard desktop machines. The LOSCAR source code in C can be obtained from the author by sending a request to [email protected].
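LOSCAR couples a multi-box ocean carbonate-chemistry model to a sediment module; the sketch below is a drastically reduced, two-reservoir caricature of the same idea (fast atmosphere-ocean exchange plus a slow weathering/burial sink that removes a carbon pulse over many millennia). It is not LOSCAR's formulation, and every coefficient is an assumed, illustrative value.

```python
# Toy two-reservoir carbon model (atmosphere + ocean) with a slow weathering/burial sink.
# Not LOSCAR; all coefficients are assumed, illustrative values.
def step(atm, ocn, dt, k_ex=0.1, k_weather=5e-4, atm_eq=600.0, ocn_eq=38_000.0):
    """One explicit-Euler step; reservoirs in Pg C, dt in years."""
    exchange = k_ex * (atm - atm_eq) - k_ex * (atm_eq / ocn_eq) * (ocn - ocn_eq)
    burial = k_weather * (atm - atm_eq)        # CO2 drawn down by weathering, buried as carbonate
    return atm + dt * (-exchange - burial), ocn + dt * exchange

atm, ocn = 600.0 + 1000.0, 38_000.0            # 1000 Pg C pulse added to the atmosphere
for year in range(100_001):
    if year % 25_000 == 0:
        print(f"t = {year:>6} yr   atmospheric carbon = {atm:7.1f} Pg C")
    atm, ocn = step(atm, ocn, dt=1.0)
```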
17. Mechanisms Leading to Co-Existence of Gas Hydrate in Ocean Sediments [Part 1 of 2]
Energy Technology Data Exchange (ETDEWEB)
Bryant, Steven; Juanes, Ruben
2011-12-31
In this project we have sought to explain the co-existence of gas and hydrate phases in sediments within the gas hydrate stability zone. We have focused on the gas/brine interface at the scale of individual grains in the sediment. The capillary forces associated with a gas/brine interface play a dominant role in many processes that occur in the pores of sediments and sedimentary rocks. The mechanical forces associated with the same interface can lead to fracture initiation and propagation in hydrate-bearing sediments. Thus the unifying theme of the research reported here is that pore-scale phenomena are key to understanding large-scale phenomena in hydrate-bearing sediments whenever a free gas phase is present. Our analysis of pore-scale phenomena in this project has delineated three regimes that govern processes in which the gas phase pressure is increasing: fracturing, capillary fingering and viscous fingering. These regimes are characterized by different morphology of the region invaded by the gas. On the other hand, when the gas phase pressure is decreasing, the corresponding regimes are capillary fingering and compaction. In this project, we studied all these regimes except compaction. Many processes of interest in hydrate-bearing sediments can be better understood when placed in the context of the appropriate regime. For example, hydrate formation in sub-permafrost sediments falls in the capillary fingering regime, whereas gas invasion into ocean sediments is likely to fall into the fracturing regime. Our research provides insight into the mechanisms by which gas reservoirs are converted to hydrate as the base of the gas hydrate stability zone descends through the reservoir. If the reservoir was no longer being charged, then variation in grain size distribution within the reservoir explains hydrate saturation profiles such as that at Mt. Elbert, where sand-rich intervals containing little hydrate are interspersed between intervals containing large hydrate
18. Mechanisms Leading to Co-Existence of Gas Hydrate in Ocean Sediments [Part 2 of 2]
Energy Technology Data Exchange (ETDEWEB)
Bryant, Steven; Juanes, Ruben
2011-12-31
In this project we have sought to explain the co-existence of gas and hydrate phases in sediments within the gas hydrate stability zone. We have focused on the gas/brine interface at the scale of individual grains in the sediment. The capillary forces associated with a gas/brine interface play a dominant role in many processes that occur in the pores of sediments and sedimentary rocks. The mechanical forces associated with the same interface can lead to fracture initiation and propagation in hydrate-bearing sediments. Thus the unifying theme of the research reported here is that pore-scale phenomena are key to understanding large-scale phenomena in hydrate-bearing sediments whenever a free gas phase is present. Our analysis of pore-scale phenomena in this project has delineated three regimes that govern processes in which the gas phase pressure is increasing: fracturing, capillary fingering and viscous fingering. These regimes are characterized by different morphology of the region invaded by the gas. On the other hand, when the gas phase pressure is decreasing, the corresponding regimes are capillary fingering and compaction. In this project, we studied all these regimes except compaction.
Many processes of interest in hydrate-bearing sediments can be better understood when placed in the context of the appropriate regime. For example, hydrate formation in sub-permafrost sediments falls in the capillary fingering regime, whereas gas invasion into ocean sediments is likely to fall into the fracturing regime. Our research provides insight into the mechanisms by which gas reservoirs are converted to hydrate as the base of the gas hydrate stability zone descends through the reservoir. If the reservoir was no longer being charged, then variation in grain size distribution within the reservoir explains hydrate saturation profiles such as that at Mt. Elbert, where sand-rich intervals containing little hydrate are interspersed between intervals containing large hydrate
19. Assessment of 238Pu and 239+240Pu in marine sediments of the Atlantic and Pacific oceans of Guatemala
International Nuclear Information System (INIS)
Mendez Ochaita, L.
2000-01-01
In this investigation, samples of marine sediments were taken from 14 representative places along the ocean coasts of Guatemala. For the assessment of 238Pu and 239+240Pu in sediments, a radiochemical method was used to mineralize the sediments, and plutonium was separated from other elements by ion exchange; the plutonium was then electrodeposited onto metallic discs. The radioactivity of plutonium was measured by an alpha spectrometry system and the alpha spectra were obtained. The levels of plutonium are not higher than in other countries where contamination has been reported. The contamination by the isotopes 239+240Pu is higher than that by 238Pu, and the contamination by the two plutonium isotopes is higher in the Atlantic than in the Pacific ocean.
20. Distribution and sources of polycyclic aromatic hydrocarbons in surface sediments from the Bering Sea and western Arctic Ocean.
Science.gov (United States)
Zhao, Mengwei; Wang, Weiguo; Liu, Yanguang; Dong, Linsen; Jiao, Liping; Hu, Limin; Fan, Dejiang
2016-03-15
To analyze the distribution and sources of polycyclic aromatic hydrocarbons (PAHs) and evaluate their potential ecological risks, the concentrations of 16 PAHs were measured in 43 surface sediment samples from the Bering Sea and western Arctic Ocean. Total PAH (tPAH) concentrations ranged from 36.95 to 150.21 ng/g (dry weight). In descending order, the surface sediment tPAH concentrations were as follows: Canada Basin > northern Chukchi Sea > Chukchi Basin > southern Chukchi Sea > Aleutian Basin > Makarov Basin > Bering Sea shelf. The Bering Sea and western Arctic Ocean mainly received PAHs of pyrogenic origin due to pollution caused by the incomplete combustion of fossil fuels. The concentrations of PAHs in the sediments of the study areas did not exceed effects range low (ERL) values. Copyright © 2016 Elsevier Ltd. All rights reserved.
1. Processing of 13C-labelled phytoplankton in a fine-grained sandy-shelf sediment (North Sea): relative importance of different macrofauna species
DEFF Research Database (Denmark)
Kamp, Anja; Witte, Ursula
2005-01-01
... by additional laboratory experiments on the role of the dominant macrofauna organism, the bivalve Fabulina fabula (Bivalvia: Tellinidae), for particulate organic matter subduction to deeper sediment layers. The specific uptake of algal 13C by macrofauna organisms was visible after 12 h and constantly increased ... carbon processing. Predatory macrofauna organisms like Nephtys spp. (Polychaeta: Nephtyidae) also quickly became labelled. The rapid subduction of fresh organic matter by F. fabula down to ca.
4 to 7 cm sediment depth could be demonstrated, and it is suggested that entrainment by macrofauna in this fine...
2. Bacterial Production and Enzymatic Activities in Deep-Sea Sediments of the Pacific Ocean: Biogeochemical Implications of Different Temperature Constraints
Science.gov (United States)
Danovaro, R.; Corinaldesi, C.; dell'Anno, A.
2002-12-01
The deep-sea bed, acting as the ultimate sink for organic material derived from the upper ocean's primary production, is now assumed to play a key role in the biogeochemical cycling of organic matter on a global scale. Early diagenesis of organic matter in marine sediments is dependent upon biological processes (largely mediated by bacterial activity) and molecular diffusion. Organic matter reaching the sea floor by sedimentation is subjected to complex biogeochemical transformations that make it largely unsuitable for direct utilization by benthic heterotrophs. Extracellular enzymatic activity in the sediment is generally recognized as the key step in the degradation and utilization of organic polymers by bacteria, and a key role in biopolymeric carbon mobilization is played by aminopeptidase, alkaline phosphatase and glucosidase activities. In the present study we investigated bacterial density, bacterial C production and exo-enzymatic activities (aminopeptidase, glucosidase and phosphatase activity) in deep-sea sediments of the Pacific Ocean in relation to the biochemical composition of sediment organic matter (proteins, carbohydrates and lipids), in order to gather information on organic matter cycling and diagenesis. Benthic viral abundance was also measured to investigate the potential role of viruses in microbial loop functioning. Sediment samples were collected at eight stations (depths ranging from 2070-3100 m) along two transects located on opposite sides (north and south) of the Juan Fernandez oceanic seismic ridge (along latitudes 33° 20' - 33° 40'), constituted by submerged volcanoes, which connects the Chilean coast to Rapa Nui Island. Since the northern and southern sides of this ridge apparently displayed small but significant differences in deep-sea temperature (related to the general ocean circulation), this sampling strategy also allowed investigating the role of different temperature constraints on bacterial activity and
3. Nondestructive X-Ray Computed Tomography Analysis of Sediment Cores: A Case Study from the Arctic Ocean
Science.gov (United States)
Oti, E.; Polyak, L. V.; Cook, A.; Dipre, G.
2014-12-01
Investigation of marine sediment records can help elucidate recent changes in Arctic Ocean circulation and sea ice conditions. We examine sediment cores from the western Arctic Ocean, representing Late to Early Quaternary age (potentially up to 1 Ma). Previous studies of Arctic sediment cores indicate that interglacial/interstadial periods with relatively high sea levels and reduced ice cover are characterized by vigorous bioturbation, while glacial intervals have little to no bioturbation. Traditional methods for studying bioturbation require physical dissection of the cores, effectively destroying them. To treat this limitation, we evaluate archival sections of the cores using an X-ray Computed Tomography (XCT) scanner, which noninvasively images the sediment cores in three dimensions. The scanner produces density-sensitive images suitable for quantitative analysis and for identification of bioturbation based on size, shape, and orientation. We use image processing software to isolate burrows from surrounding sediment, reconstruct them three-dimensionally, and then calculate their surface areas, volumes, and densities. Preliminary analysis of a core extending to the early Quaternary shows that bioturbation ranges from 0 to approximately 20% of the core's volume. In future research, we will quantitatively define the relationship between bioturbation activity and glacial regimes. XCT examination of bioturbation and other sedimentary features has the potential to shed light on paleoceanographic conditions such as sedimentation patterns and food flux. XCT is an alternative, underexplored investigation method that bears implications not only for illustrating paleoclimate variations but also for preserving cores for future, more advanced technologies.
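A minimal sketch of the kind of burrow quantification described above: threshold the density-sensitive CT volume, discard speckle, and report the burrow volume fraction. The synthetic array, the density threshold, and the 50-voxel size cutoff are assumptions; the study used dedicated image-processing software rather than this exact workflow.

```python
import numpy as np
from scipy import ndimage

def burrow_volume_fraction(ct_volume, threshold):
    """Fraction of voxels classified as burrow fill (density below a threshold),
    after discarding connected objects smaller than 50 voxels (speckle)."""
    mask = ct_volume < threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 50))
    return keep.sum() / ct_volume.size

# Synthetic 64^3 "core" with two low-density tubes standing in for burrows.
rng = np.random.default_rng(1)
core = rng.normal(1500, 30, size=(64, 64, 64))     # background sediment density (CT-number-like)
core[:, 30:33, 20:23] = 1300                       # vertical burrow 1
core[10:50, 10:13, 40:43] = 1320                   # burrow 2

print(f"burrow volume fraction: {burrow_volume_fraction(core, threshold=1400):.1%}")
```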
4. Composition, production, and loss of carbohydrates in subtropical shallow subtidal sandy sediments: Rapid processing and long-term retention revealed by 13C-labeling
NARCIS (Netherlands)
Oakes, J.M.; Eyre, B.D.; Middelburg, J.J.; Boschker, H.T.S.
2010-01-01
The composition and production of carbohydrates (mannose, rhamnose, fucose, galactose, glucose, and xylose) and their transfer among sediment compartments (microphytobenthos [MPB], bacteria, and detritus) was investigated through in situ labeling with 13C-bicarbonate. After 60 h, 13C was found in
5. Sediment Transport and Infilling of a Borrow Pit on an Energetic Sandy Ebb Tidal Delta Offshore of Hilton Head Island, South Carolina
Science.gov (United States)
Wren, A.; Xu, K.; Ma, Y.; Sanger, D.; Van Dolah, R.
2014-12-01
Bottom-mounted instrumentation was deployed at two sites on an ebb tidal delta to measure hydrodynamics, sediment transport, and seabed elevation. One site ('borrow site') was 2 km offshore and used as a dredging site for beach nourishment of nearby Hilton Head Island in South Carolina, and the other site ('reference site') was 10 km offshore and not directly impacted by the dredging. In-situ time-series data were collected during two periods after the dredging: March 15 - June 12, 2012 ('spring') and August 18 - November 18, 2012 ('fall'). At the reference site, directional wave spectra and upper water column current velocities were measured, as well as high-resolution current velocity profiles and suspended sediment concentration profiles in the Bottom Boundary Layer (BBL). Seabed elevation and small-scale seabed changes were also measured. At the borrow site, seabed elevation and near-bed wave and current velocities were collected using an Acoustic Doppler Velocimeter. Throughout both deployments, bottom wave orbital velocities ranged from 0 - 110 cm/s at the reference site. Wave orbital velocities were much lower at the borrow site, ranging from 10-20 cm/s, as wave energy was dissipated on the extensive and rough sand banks before reaching the borrow site. Suspended sediment concentrations increased throughout the BBL when orbital velocities increased to approximately 20 cm/s. Sediment grain size and critical shear stresses were similar at both sites; therefore, re-suspension due to waves was less frequent at the borrow site. However, sediment concentrations were highly correlated with the tidal cycle at both sites. Semidiurnal tidal currents were similar at the two sites, typically ranging from 0 - 50 cm/s in the BBL. Maximum currents exceeded the critical shear stress and measured suspended sediment concentrations increased during the first hours of the tidal cycle when the tide switched to flood tide.
Results indicate waves contributed more to sediment mobility at
6. U- and Th-series nuclides in settling particles: implications to sediment transport through surface waters and interior ocean
International Nuclear Information System (INIS)
Sarin, M.M.
2012-01-01
The Bay of Bengal is a unique ocean basin receiving large quantities of fresh water and sediment supply from several rivers draining the Indian subcontinent. The annual flux of suspended sediments discharged into the Bay of Bengal is one billion tons, about one-tenth of the global sediment discharge into the ocean. The water and sediment discharge to the Bay show significant seasonal variation, with maximum transport coinciding with the SW monsoon (July-September). Earlier studies on the distribution of clay minerals in sediments have led to the suggestion that the sediments of the western Bengal Fan are mainly derived from the Peninsular rivers, whereas the rest of the Fan sediments is influenced by the Himalayan rivers. Settling fluxes of particulate matter through the water column of the Bay of Bengal show seasonal trends resulting from monsoon-enhanced sediment supply via rivers and biological processes in the water column. It is, thus, important to understand the influence of the seasonally varying particle fluxes on the solute-particle interactions and chemical scavenging processes in the surface and deep waters of the Bay of Bengal. In this context, measurements of U- and Th-series nuclides in the settling particles are most relevant. The radionuclide fluxes (230Th, 228Th and 210Pb) in the settling particles provide insight into the role of their removal by vertical particle flux and/or lateral transport (removal at the ocean boundaries). A study carried out in the northern Bay of Bengal documents that the authigenic flux of 230Th, as measured in sediment trap samples from deep waters, is balanced by its production in the overhead water column. The sediment mass flux and the Al and 228Th fluxes are similar in the settling particles through shallow and deep waters, suggesting predominant removal by vertical particle flux in the northern Bay of Bengal. In the central Bay, particulate mass, Al and 228Th fluxes are higher in the trap material from deep waters relative
7. Late Cretaceous and Cenozoic seafloor and oceanic basement roughness: Spreading rate, crustal age and sediment thickness correlations
Science.gov (United States)
Bird, Robert T.; Pockalny, Robert A.
1994-05-01
Single-channel seismic data from the South Australian Basin and Argentine Basin, and bathymetry data from the flanks of the Mid-Atlantic Ridge, East Pacific Rise and Southwest Indian Ridge are analysed to determine the root-mean-square (RMS) roughness of the seafloor and oceanic basement created at seafloor spreading rates ranging from 3 to 80 km/Ma (half-rate). For these data, crustal ages range from near zero to 85 Ma and sediment thicknesses range from near zero to over 2 km. Our results are consistent with a negative correlation of basement roughness and spreading rate, where roughness decreases dramatically through the slow-spreading regime. This correlation between oceanic basement roughness and spreading rate appears to have existed since the Late Cretaceous for slow and intermediate spreading rates, suggesting that the fundamental processes creating abyssal hill topography may have remained the same for this time period. Basement roughness does not appear to decrease (smooth) with increasing crustal age, and therefore off-ridge degradation of abyssal hill topography by mass wasting is not detected by our data. Seismic data reveal that sediment thickness increases with increasing crustal age in the South Australian Basin and Argentine Basin, but not monotonically and with significant regional variation. We show that minor accumulations of sediment can affect roughness significantly. Average sediment accumulations of less than 50 m (for our 100 km long sample seismic profiles and half-spreading rates) ... ocean ridges.
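RMS roughness in studies of this kind is usually the root-mean-square deviation of the basement (or seafloor) profile about a regional trend, evaluated over fixed-length windows. A generic sketch of that calculation follows; the window length, linear detrending choice, and synthetic profile are assumptions rather than the authors' processing parameters.

```python
import numpy as np

def rms_roughness(depth_m, spacing_m, window_km=100.0):
    """RMS deviation of an along-track depth profile about a linear trend,
    computed in non-overlapping windows of length `window_km`."""
    n_win = max(1, int(round(window_km * 1000.0 / spacing_m)))
    out = []
    for start in range(0, depth_m.size - n_win + 1, n_win):
        seg = depth_m[start:start + n_win]
        x = np.arange(seg.size)
        trend = np.polyval(np.polyfit(x, seg, 1), x)    # remove regional slope
        out.append(np.sqrt(np.mean((seg - trend) ** 2)))
    return np.array(out)

# Synthetic basement profile: 200 km at 500 m spacing, abyssal-hill-like relief on a gentle slope.
rng = np.random.default_rng(2)
x = np.arange(0, 200_000, 500.0)
basement = -4500 - 0.002 * x + 150 * np.sin(2 * np.pi * x / 8_000) + 40 * rng.standard_normal(x.size)

print("RMS roughness per 100 km window (m):", np.round(rms_roughness(basement, 500.0), 1))
```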
8. Systems analysis approach to the disposal of high-level waste in deep ocean sediments
International Nuclear Information System (INIS)
Marsily, G. de; Hill, M.D.; Murray, C.N.; Talbert, D.M.; Van Dorp, F.; Webb, G.A.M.
1980-01-01
Among the different options being studied for disposal of high-level solidified waste, increasing attention is being paid to that of emplacement of glasses incorporating the radioactivity in deep oceanic sediments. This option has the advantage that the areas of the oceans under investigation appear to be relatively unproductive biologically, are relatively free from cataclysmic events, and are areas in which the natural processes are slow. Thus the environment is stable and predictable, so that a number of barriers to the release and dispersion of radioactivity can be defined. Task Groups set up in the framework of the International Seabed Working Group have been studying many aspects of this option since 1976. In order that the various parts of the problem can be assessed within an integrated framework, the methods of systems analysis have been applied. In this paper the Systems Analysis Task Group members report the development of an overall system model. This will be used in an iterative process in which a preliminary analysis, together with a sensitivity analysis, identifies the parameters and data of most importance. The work of the other task groups will then be focussed on these parameters and data requirements so that improved results can be fed back into an improved overall systems model. The major requirements for the development of a preliminary overall systems model are that the problem should be separated into identified elements and that the interfaces between the elements should be clearly defined. The model evolved is deterministic and defines the problem elements needed to estimate doses to man.
9. Assessment of 210Po in agricultural soils and marine sediments of the Atlantic and Pacific oceans of Guatemala
International Nuclear Information System (INIS)
Garcia Vela, A.G.
1999-01-01
A radiochemical method based on 210Po extraction was used to measure radioactivity in samples of soil and marine sediments from the Atlantic and Pacific ocean coasts. The polonium solution was treated to deposit the metal onto a zinc disc, which was measured with an alpha spectrometry system based on passivated implanted planar silicon (PIPS) detectors. The concern relates to cultivated soils and to consumption products from the sea and soil, which come from these sources. The results show that the activity of 210Po in agricultural soils and marine sediments is below the ALI recommended by international standards.
10. Low level spectrometry of Fe-Mn concretions from the Pacific Ocean bed and of Pierre St. Martin cave sediments
International Nuclear Information System (INIS)
Dimchev, T.; Prodanov, Ya.
1977-01-01
Results of the nondestructive gamma-spectrometric analysis of Fe-Mn nodules from the Pacific Ocean in the neighbourhood of the Rarotonga and Fiji Isles are reported. The cave sediments from the San Martin Cave in the Pyrenees and from other caves were also analyzed. The nondestructive method was used for analyzing samples with a low-background scintillation gamma spectrometer. Results obtained for geological samples, soils, sediments, etc. are given for comparison. Statistical methods were applied for the quantitative analysis of the gamma spectra obtained. (author)
11. 2014 NOAA Ortho-rectified Mosaic of Hurricane Sandy Coastal Impact Area
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains ortho-rectified mosaic tiles at 0.35m GSD created for NOAA Integrated Ocean and Coastal Mapping (IOCM) initiative in Hurricane Sandy coastal...
12. Crystal-chemical characteristics of nontronites from bottom sediments of the Pacific Ocean
International Nuclear Information System (INIS)
Palchik, N. A.; Moroz, T. N.; Grigorieva, T. N.; Nikandrova, N. K.; Miroshnichenko, L. V.
2017-01-01
A crystal-chemical analysis of the nontronite samples formed in deep-water sediments of the underwater Juan de Fuca Ridge in the Pacific Ocean has been performed using powder X-ray diffraction, IR spectroscopy, and Mössbauer spectroscopy. A comparison with the previously investigated nontronites from different regions of the Sea of Okhotsk showed that the structural features of these formations are due to the difference in the physicochemical parameters of their crystallization. The values of the basal interplanar spacing d001 (within 11-13 Å) in the samples analyzed are determined by the degree of hydration and cation filling of the interlayer space, while the differences in the IR spectra are due to isomorphic substitutions in the structure. The character of cation distribution and the nature and concentration of stacking faults in nontronite structures are determined. The differences in the composition, structure, and properties of nontronites of different origin are confirmed by theoretical calculations of their structural parameters.
14. Application of systems analysis to the disposal of high level waste in deep ocean sediments
International Nuclear Information System (INIS)
De Marsily, G.; Dorp, F. van
1982-01-01
Emplacement in deep ocean sediments is one of the disposal options being considered for solidified high level radioactive waste. Task groups set up within the framework of the NEA Seabed Working Group have been studying many aspects of this option since 1976. The methods of systems analysis have been applied to enable the various parts of the problem to be assessed within an integrated framework. This paper describes the progress made by the Systems Analysis Task Group towards the development of an overall system model. The Task Group began by separating the problem into elements and defining the interfaces between these elements. A simple overall system model was then developed and used in both a preliminary assessment and a sensitivity analysis to identify the most important parameters. These preliminary analyses used a very simple model of the overall system and therefore the results cannot be used to draw any conclusions as to the acceptability of the sub-seabed disposal option. However, they served to show the utility of the systems analysis method. The work of the other task groups will focus on the important parameters so that improved results can be fed back into an improved system model. Subsequent iterations will eventually provide an input to an acceptability decision. (Auth.)
15. The record of India-Asia collision preserved in Tethyan ocean basin sediments.
Science.gov (United States)
Najman, Yani; Jenks, Dan; Godin, Laurent; Boudagher-Fadel, Marcelle; Bown, Paul; Horstwood, Matt; Garzanti, Eduardo; Bracialli, Laura; Millar, Ian
2015-04-01
The timing of India-Asia collision is critical to the understanding of crustal deformation processes, since, for example, it impacts on calculations regarding the amount of convergence that needs to be accommodated by various mechanisms. In this research we use sediments originally deposited in the Tethyan ocean basin and now preserved in the Himalayan orogen to constrain the timing of collision. In the NW Himalaya, a number of workers have proposed a ca 55-50 Ma age for collision along the Indus suture zone which separates India from the Kohistan-Ladakh intraoceanic island arc (KLA) to the north. This is based on a number of factors including the age of the youngest marine sediments in the Indus suture (e.g. Green et al. 2008), the age of eclogites indicative of the onset of Indian continental subduction (e.g. de Sigoyer et al. 2000), and the first evidence of detritus from north of the suture zone deposited on the Indian plate (e.g. Clift et al. 2002). Such evidence can be interpreted as documenting the age of India-Asia collision if one takes the KLA to have collided with the Asian plate prior to its collision with India (e.g. Petterson 2010 and refs therein).
However, an increasing number of workers propose that the KLA collided with Asia subsequent to its earlier collision with India, dated variously at 85 Ma (Chatterjee et al. 2013), 61 Ma (Khan et al. 2009) and 50 Ma (Bouilhol et al. 2013). This, plus the questioning of earlier provenance work (Clift et al. 2002) regarding the validity of their data for constraining the timing of earliest arrival of material from north of the suture deposited on the Indian plate (Henderson et al. 2011), suggests that the time is right for a reappraisal of this topic. We use a provenance-based approach here, using combined U-Pb and Hf analyses on detrital zircons from Tethyan ocean basin sediments, along with petrography and biostratigraphy, to identify the first arrival of material from north of the Indian plate onto the Indian continent, to constrain
16. Bacterial profiling of Saharan dust deposition in the Atlantic Ocean using sediment trap moorings – year one results
Science.gov (United States)
Munday, Chris; Brummer, Geert-Jan; van der Does, Michelle; Korte, Laura; Stuut, Jan-Berend
2015-04-01
Large quantities of dust are transported from the Sahara Desert across the Atlantic Ocean towards the Caribbean each year, with a large portion of it deposited in the ocean. This dust brings an array of minerals, nutrients and organic matter, both living and dead. This input potentially fertilizes phytoplankton growth, with resulting knock-on effects throughout the food chain. The input of terrestrial microbial life may also have an impact on the marine microbial community. The current multi-year project consists of a transect of floating dust collectors and sub-surface sediment traps placed at 12°N across the Atlantic Ocean. Sediment traps are located 1200 m and 3500 m below the sea surface and all are synchronized to collect samples for a period of two weeks. The aim is to understand the links between dust input and the bacterial community and how this relates to ocean productivity and the carbon cycle. The first set of sediment trap samples was recovered using the RV Pelagia in November 2013 with promising results. Results from 7 sediment traps (three at 1200 m and four at 3500 m) were obtained. In general, the total mass flux decreased as distance from the source increased, and the upper traps generally held more material than those at 3500 m. Denaturing Gradient Gel Electrophoresis (DGGE) was used as a screening technique, revealing highly varied profiles, with the upper (1200 m) traps generally showing more variation throughout the year. Several samples have been submitted for high-throughput DNA sequencing which will identify the variations in these samples.
17. A comparison of microbial communities in deep-sea polymetallic nodules and the surrounding sediments in the Pacific Ocean
Science.gov (United States)
Wu, Yue-Hong; Liao, Li; Wang, Chun-Sheng; Ma, Wei-Lin; Meng, Fan-Xu; Wu, Min; Xu, Xue-Wei
2013-09-01
Deep-sea polymetallic nodules, rich in metals such as Fe, Mn, and Ni, are potential resources for future exploitation. Early culturing and microscopy studies suggest that polymetallic nodules are at least partially biogenic. To understand the microbial communities in this environment, we compared microbial community composition and diversity inside nodules and in the surrounding sediments. Three sampling sites in the Pacific Ocean containing polymetallic nodules were used for culture-independent investigations of microbial diversity.
A total of 1013 near full-length bacterial 16S rRNA gene sequences and 640 archaeal 16S rRNA gene sequences of ~650 bp from nodules and the surrounding sediments were analyzed. Bacteria showed higher diversity than archaea. Interestingly, sediments contained more diverse bacterial communities than nodules, while the opposite was detected for archaea. Bacterial communities tend to be mostly unique to sediments or nodules, with only 13.3% of sequences shared. The most abundant bacterial groups detected only in nodules were Pseudoalteromonas and Alteromonas, which were predicted to play a role in building matrix outside cells to induce or control mineralization. However, archaeal communities were mostly shared between sediments and nodules, including the most abundant OTU containing 290 sequences from marine group I Thaumarchaeota. PCoA analysis indicated that microhabitat (i.e., nodule or sediment) seemed to be a major factor influencing microbial community composition, rather than sampling locations or distances between locations.
18. Metal release from contaminated coastal sediments under changing pH conditions: Implications for metal mobilization in acidified oceans.
Science.gov (United States)
Wang, Zaosheng; Wang, Yushao; Zhao, Peihong; Chen, Liuqin; Yan, Changzhou; Yan, Yijun; Chi, Qiaoqiao
2015-12-30
To investigate the impacts and processes of CO2-induced acidification on metal mobilization, laboratory-scale experiments were performed, simulating scenarios where carbon dioxide was injected into sediment-seawater layers inside non-pressurized chambers. Coastal sediments were sampled from two sites with different contamination levels and subjected to pre-determined pH conditions. Sediment samples and overlying water were collected for metal analysis after 10 days. The results indicated that CO2-induced ocean acidification would provoke increased metal mobilization, causing adverse side-effects on water quality. The mobility of metals from sediment to the overlying seawater was correlated with the reduction in pH. Results of sequential extractions of sediments illustrated that exchangeable metal forms were the dominant source of mobile metals. Collectively, our data revealed that high metal concentrations in overlying seawater released from contaminated sediments under acidic conditions may strengthen the existing contamination gradients in Maluan Bay and represent a potential risk to ecosystem health in coastal environments. Copyright © 2015 Elsevier Ltd. All rights reserved.
19. The concentration of 137Cs and organic carbon in sediment at Rat Island in the Indian Ocean
International Nuclear Information System (INIS)
Muslim; Reza Agung Arjana; Wahyu Retno Prihatiningsih
2016-01-01
Rat Island is one of the islands of Indonesia, located in the Indian Ocean about 10 kilometers west of Bengkulu; it has beautiful scenery both on land and on the seabed, making it a favorite tourist spot in Bengkulu. The purpose of this study was to determine the condition of 137Cs in sediments and its relation to total organic carbon and sediment texture. Sediment sampling was carried out on 17 September 2014 at six stations: three stations relatively close to Rat Island with water depths of ≤ 1 m, and three others farther from the island at depths of 14-18 meters. The sediment texture at water depths of ≤ 1 m is sand and the TOC contents were <5.5%. On the other hand, at water depths of 14-18 meters the sediment texture is a silt-sand mixture and the TOC contents were ≥6%.
The concentration of 137Cs in sediment was influenced by texture characteristics and TOC content. (author)
20. High-velocity basal sediment package atop oceanic crust, offshore Cascadia: Impacts on plate boundary processes and fluid migration
Science.gov (United States)
Peterson, D. E.; Keranen, K. M.
2017-12-01
Differences in fluid pressure and mechanical properties at megathrust boundaries in subduction zones have been proposed to create varying seismogenic behavior. In Cascadia, where large ruptures are possible but little seismicity occurs presently, new seismic transects across the deformation front (COAST cruise; Holbrook et al., 2012) image an unusually high-wavespeed sedimentary unit directly overlying oceanic crust. Wavespeed increases before the sediments reach the deformation front, and the well-laminated unit, consistently of 1 km thickness, can be traced for 50 km beneath the accretionary prism before imaging quality declines. Wavespeed is modeled via iterative prestack time migration (PSTM) imaging and increases from 3.5 km/s on the seaward end of the profile to >5.0 km/s near the deformation front. Landward of the deformation front, wavespeed is low along seaward-dipping thrust faults in the Quaternary accretionary prism, indicative of rapid dewatering along faults. The observed wavespeed of 5.5 km/s just above the subducting crust is consistent with low porosity. ... intersects the plate boundary at an oblique angle and changes the degree of hydration of the oceanic plate as it subducts within our area. Fluid flow out of oceanic crust is likely impeded by the low-porosity basal sediment package except along the focused thrust faults. Decollements are present at the top of oceanic basement, at the top of the high-wavespeed basal unit, and within sedimentary strata at higher levels; the decollement at the top of oceanic crust is active at the toe of the deformation front. The basal sedimentary unit appears to be mechanically strong, similar to observations from offshore Sumatra, where strongly consolidated sediments at the deformation front are interpreted to facilitate megathrust rupture to the trench (Hupers et al., 2017). A uniformly strong plate interface at Cascadia may inhibit microseismicity while building stress that is released in great earthquakes.
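The porosity value that followed "consistent with" in this abstract was lost in extraction. As general background rather than the authors' method, porosity is often back-calculated from P-wave speed with the Wyllie time-average relation; the matrix and fluid velocities below are assumed, typical values.

```python
# Wyllie time-average estimate of porosity from P-wave speed:
#   1/Vp = phi/Vf + (1 - phi)/Vm   =>   phi = (1/Vp - 1/Vm) / (1/Vf - 1/Vm)
# V_MATRIX and V_FLUID are assumed values, not parameters taken from the paper.
V_MATRIX = 6.0   # km/s, assumed consolidated siliciclastic matrix
V_FLUID = 1.5    # km/s, seawater

def wyllie_porosity(vp_km_s, vm=V_MATRIX, vf=V_FLUID):
    return (1.0 / vp_km_s - 1.0 / vm) / (1.0 / vf - 1.0 / vm)

for vp in (3.5, 5.0, 5.5):   # wavespeeds quoted in the abstract
    print(f"Vp = {vp} km/s  ->  Wyllie porosity ~ {wyllie_porosity(vp):.0%}")
```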
1. Clay mineralogical and Sr, Nd isotopic investigations in two deep-sea sediment cores from the Northeast Indian Ocean
International Nuclear Information System (INIS)
Anil Babu, G.; Masood Ahmad, S.; Padmakumari, V.M.; Dayal, A.M.
2004-01-01
Sr and Nd isotopic studies of the terrigenous component of ocean sediments provide useful information about weathering patterns near the source rock and the climatic conditions that existed on the continents. Variations in 87Sr/86Sr and 143Nd/144Nd isotopic ratios in clastic sediments depend on the source on the continents, volcanic input and circulation changes. The composition of clay minerals mainly depends on the climate, geology and topography of the surrounding region. Chlorite and illite are formed under physical weathering in arid, cold climates, whereas kaolinite and smectite are the characteristic products of chemical weathering in humid, wet climatic conditions. Therefore, the variations in clay mineral composition in deep-sea sediments can be interpreted in terms of changes in the climatic conditions that prevailed in the continental source areas.
2. Application of sediment core modelling to interpreting the glacial-interglacial record of Southern Ocean silica cycling
Directory of Open Access Journals (Sweden)
A. Ridgwell
2007-07-01
Full Text Available Sediments from the Southern Ocean reveal a meridional divide in the biogeochemical cycling response to the glacial-interglacial cycles of the late Neogene. South of the present-day position of the Antarctic Polar Front in the Atlantic sector of the Southern Ocean, biogenic opal is generally much more abundant in sediments during interglacials compared to glacials. To the north, an anti-phased relationship is observed, with maximum opal abundance instead occurring during glacials. This antagonistic response of sedimentary properties provides an important model validation target for testing hypotheses of glacial-interglacial change against, particularly for understanding the causes of the concurrent variability in atmospheric CO2. Here, I illustrate a time-dependent modelling approach to helping understand climates of the past by means of the mechanistic simulation of marine sediment core records. I find that a close match between model-predicted and observed down-core changes in sedimentary opal content can be achieved when changes in seasonal sea-ice extent are imposed, whereas the predicted sedimentary response to iron fertilization on its own is not consistent with sedimentary observations. The results of this sediment record model-data comparison support previous inferences that the changing cryosphere is the primary driver of the striking features exhibited by the paleoceanographic record of this region.
3. Enhanced ocean carbon storage from anaerobic alkalinity generation in coastal sediments
NARCIS (Netherlands)
Thomas, H.; Schiettecatte, L.-S.; Suykens, K.; Koné, Y.J.M.; Shadwick, E.H.; Prowe, A.E.F.; Bozec, Y.; Baar, H.J.W. de; Borges, A.V.; Slomp, C.
2009-01-01
The coastal ocean is a crucial link between land, the open ocean and the atmosphere. The shallowness of the water column permits close interactions between the sedimentary, aquatic and atmospheric compartments, which otherwise are decoupled at long time scales (≅ 1000 yr) in the open oceans. Despite
4. Antagonistic Effects of Ocean Acidification and Rising Sea Surface Temperature on the Dissolution of Coral Reef Carbonate Sediments
Directory of Open Access Journals (Sweden)
Daniel Trnovsky
2016-11-01
Full Text Available Increasing atmospheric CO2 is raising sea surface temperature (SST) and increasing seawater CO2 concentrations, resulting in a lower oceanic pH (ocean acidification, OA), which is expected to reduce the accretion of coral reef ecosystems. Although sediments comprise most of the calcium carbonate (CaCO3) within coral reefs, no in situ studies have looked at the combined effects of increased SST and OA on the dissolution of coral reef CaCO3 sediments. In situ benthic chamber incubations were used to measure dissolution rates in permeable CaCO3 sands under future OA and SST scenarios in a coral reef lagoon on Australia's Great Barrier Reef (Heron Island). End of century (2100) simulations (temperature +2.7°C and pH -0.3) shifted carbonate sediments from net precipitating to net dissolving. Warming increased the rate of benthic respiration (R) by 29% per 1°C and lowered the ratio of productivity to respiration (P/R; ΔP/R = -0.23), which increased the rate of CaCO3 sediment dissolution (average net increase of 18.9 mmol CaCO3 m-2 d-1 for business-as-usual scenarios). This is most likely due to the influence of warming on benthic P/R which, in turn, was an important control on sediment dissolution through the respiratory production of CO2. The effect of increasing CO2 on CaCO3 sediment dissolution (average net increase of 6.5 mmol CaCO3 m-2 d-1 for business-as-usual scenarios) was significantly less than the effect of warming. However, the combined effect of increasing both SST and pCO2 on CaCO3 sediment dissolution was non-additive (average net increase of 5.6 mmol CaCO3 m-2 d-1) due to the different responses of the benthic community. This study highlights that benthic biogeochemical processes such as metabolism and associated CaCO3 sediment dissolution respond rapidly to changes in SST and OA, and that the response to multiple environmental changes is not necessarily additive.
5. Cohesive and mixed sediment in the Regional Ocean Modeling System (ROMS v3.6) implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST r1234)
Directory of Open Access Journals (Sweden)
C. R. Sherwood
2018-05-01
Full Text Available We describe and demonstrate algorithms for treating cohesive and mixed sediment that have been added to the Regional Ocean Modeling System (ROMS) version 3.6, as implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST) Subversion repository revision 1234. These include the following: floc dynamics (aggregation and disaggregation) in the water column; changes in floc characteristics in the seabed; erosion and deposition of cohesive and mixed (combination of cohesive and non-cohesive) sediment; and biodiffusive mixing of bed sediment. These routines supplement existing non-cohesive sediment modules, thereby increasing our ability to model fine-grained and mixed-sediment environments. Additionally, we describe changes to the sediment bed layering scheme that improve the fidelity of the modeled stratigraphic record. Finally, we provide examples of these modules implemented in idealized test cases and a realistic application.
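The floc-dynamics routines referred to here track aggregation and breakup across multiple floc size classes. As a heavily simplified, hedged illustration of the underlying balance (not the size-class scheme actually implemented in COAWST), a single-class, Winterwerp-style relaxation between shear-driven growth and breakup can be sketched as follows; every coefficient is an assumed value.

```python
# Single-class flocculation sketch: floc diameter relaxes toward a balance between
# shear-driven aggregation and breakup. Coefficients are assumed, illustrative values.
def floc_diameter(duration_s, dt, G, conc_kg_m3, d0=20e-6,
                  d_primary=4e-6, k_agg=0.5, k_brk=1.5e-4):
    d = d0
    for _ in range(int(duration_s / dt)):
        growth = k_agg * conc_kg_m3 * G * d                           # aggregation ~ shear * concentration
        breakup = k_brk * G ** 1.5 * d * (d - d_primary) / d_primary  # breakup grows with floc size
        d = max(d_primary, d + dt * (growth - breakup))
    return d

for G in (2.0, 10.0, 50.0):                            # turbulent shear rate (1/s)
    d_eq = floc_diameter(duration_s=3600, dt=1.0, G=G, conc_kg_m3=0.05)
    print(f"G = {G:>4} 1/s  ->  floc diameter after 1 h ~ {d_eq * 1e6:.0f} um")
```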
Three different ocean roughness closure models were analyzed: DGHQ (which is based on wave age), TY2001 (which is based on wave steepness), and OOST (which considers both the effects of wave age and steepness). Including the ocean roughness in the atmospheric module improved the wind intensity estimation and therefore also the wind waves, surface currents, and storm surge amplitude. For example, during the passage of Hurricane Ida through the Gulf of Mexico, the wind speeds were reduced due to wave-induced ocean roughness, resulting in better agreement with the measured winds. During Nor'Ida, including the wave-induced surface roughness changed the form and dimension of the main low pressure cell, affecting the intensity and direction of the winds. The combined wave age- and wave steepness … 8. Sedimentation Science.gov (United States) Cliff R. Hupp; Michael R. Schening 2000-01-01 Sedimentation is arguably the most important water-quality concern in the United States. Sediment trapping is cited frequently as a major function of riverine-forested wetlands, yet little is known about sedimentation rates at the landscape scale in relation to site parameters, including woody vegetation type, elevation, velocity, and hydraulic connection to the river...
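For context on the three sea-surface roughness closures named in the Hurricane Ida/Nor'Ida entry above: the abstract does not reproduce their formulations, but closures of this family are usually written in one of the following generic forms (shown only as an illustration; the coefficients A_i and B_i differ between parameterizations and are not taken from the entry):

z_0 = \alpha \, \frac{u_*^2}{g} \quad \text{(Charnock-type background)}, \qquad
\frac{z_0}{H_s} = A_1 \left( \frac{u_*}{c_p} \right)^{B_1} \quad \text{(wave-age dependent)}, \qquad
\frac{z_0}{H_s} = A_2 \left( \frac{H_s}{L_p} \right)^{B_2} \quad \text{(wave-steepness dependent)}

where z_0 is the roughness length handed to the atmospheric bulk-flux scheme, u_* the friction velocity, c_p and L_p the phase speed and wavelength at the spectral peak, H_s the significant wave height, and g gravity. A larger z_0 raises the drag coefficient, which is consistent with the reduced wind speeds reported over the Gulf of Mexico once wave-induced roughness was included.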
9. The Atlantic Coast of Maryland, Sediment Budget Update: Tier 2, Assateague Island and Ocean City Inlet Science.gov (United States) 2016-06-01 … 111 – Rivers and Harbors Act), the navigational structures at the Ocean City Inlet, and a number of Federally authorized channels (Figure 1). Reed...Tier 2, Assateague Island and Ocean City Inlet by Ernest R. Smith, Joseph C. Reed, and Ian L. Delwiche PURPOSE: This Coastal and Hydraulics...of the Atlantic Ocean shoreline within the U.S. Army Corps of Engineers (USACE) Baltimore District's Area of Responsibility, which for coastal … 10. Inventory of 226Ra, 228Ra and 210Pb in marine sediment cores of the Southwest Atlantic Ocean International Nuclear Information System (INIS) Costa, Alice M.R.; Oliveira, Joselene de; Figueira, Rubens C.L.; Mahiques, Michel M.; Sousa, Silvia H.M. 2015-01-01 210Pb (22.3 y) is a radioactive isotope successfully applied as a tracer for sediment dating over the last 100-150 years. The application of 226Ra and 228Ra as paleoceanographic tracers (half-lives of 1,600 y and 5.7 y, respectively) also gives some information on the ocean's role in past climate change. In this work, two sediment cores collected in the Southwest Atlantic Ocean were analyzed. The sediment samples were freeze-dried and acid-digested in a microwave. A radiochemical separation of 226Ra, 228Ra and 210Pb was carried out, and gross alpha and gross beta measurements of the precipitates Ba(Ra)SO4 and PbCrO4 were performed in a low-background gas-flow proportional counter. Activity concentrations of 226Ra ranged from 45 Bq kg-1 to 70 Bq kg-1 in NAP-62 and from 57 Bq kg-1 to 82 Bq kg-1 in NAP-63 samples. The concentration of 228Ra varied between 37 Bq kg-1 and 150 Bq kg-1 in NAP-62 and between 23 Bq kg-1 and 111 Bq kg-1 in NAP-63 samples. The concentration of total 210Pb ranged from 126 Bq kg-1 to 256 Bq kg-1 in NAP-62 and from 63 Bq kg-1 to 945 Bq kg-1 in NAP-63 samples. Unsupported 210Pb (210Pbuns) varied from 68 Bq kg-1 to 192 Bq kg-1 in NAP-62, while it varied from <4.9 Bq kg-1 to 870 Bq kg-1 in the NAP-63 profile. Increased values of 210Pbuns were found at the top of both the NAP-62 and NAP-63 sediment profiles. (author) 11. Organic carbon and nitrogen in the surface sediments of world oceans and seas: distribution and relationship to bottom topography Energy Technology Data Exchange (ETDEWEB) Premuzic, E.T. 1980-06-01 Information dealing with the distribution of organic carbon and nitrogen in the top sediments of world oceans and seas has been gathered and evaluated. Based on the available information a master chart has been constructed which shows the world distribution of sedimentary organic matter in the oceans and seas. Since organic matter exerts an influence upon the settling properties of fine inorganic particles (e.g. clay minerals) and, further, the interaction between organic matter and clay minerals is maximal, a relationship between the overall bottom topography and the distribution of clay minerals and organic matter should be observable on a worldwide basis. Initial analysis of the available data indicates that such a relationship does exist and its significance is discussed.
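The 210Pb inventory entry above reports total 210Pb, 226Ra (a proxy for the supported fraction) and unsupported 210Pb for each core, which is the input needed for 210Pb dating. The Python sketch below shows how such numbers are commonly turned into ages; the constant-initial-concentration (CIC) form, the function names and the sample values are illustrative assumptions, not the procedure or data of that study.

import numpy as np

T_HALF_PB210 = 22.3                  # half-life of 210Pb in years, as cited in the entry
LAMBDA = np.log(2) / T_HALF_PB210    # decay constant (1/yr)

def excess_pb210(total_pb210, ra226):
    """Unsupported (excess) 210Pb = total 210Pb minus the 226Ra-supported fraction."""
    return total_pb210 - ra226

def cic_age(excess_at_depth, excess_at_surface):
    """Constant-initial-concentration (CIC) age of a layer, in years."""
    return np.log(excess_at_surface / excess_at_depth) / LAMBDA

# Illustrative values only (Bq/kg), loosely in the range reported for the cores:
surface_excess = excess_pb210(256.0, 70.0)   # hypothetical near-surface sample
deeper_excess = excess_pb210(126.0, 45.0)    # hypothetical deeper sample
print(round(cic_age(deeper_excess, surface_excess), 1), "years")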
12. The effects of wind and rainfall on suspended sediment concentration related to the 2004 Indian Ocean tsunami International Nuclear Information System (INIS) Zhang Xinfeng; Tang Danling; Li Zizhen; Zhang Fengpan 2009-01-01 The effects of rainfall and wind speed on the dynamics of suspended sediment concentration (SSC), during the 2004 Indian Ocean tsunami, were analyzed using spatial statistical models. The results showed a positive effect of wind speed on SSC, and inconsistent effects (positive and negative) of rainfall on SSC. The effects of wind speed and rainfall on SSC weakened immediately around the tsunami, indicating that tsunami-caused floods and earthquake-induced shaking may have suddenly disturbed the ocean-atmosphere interaction processes, and thus weakened the effects of wind speed and rainfall on SSC. Wind speed and rainfall increased markedly, and reached their maximum values immediately after the tsunami week. Rainfall at this particular week exceeded twice the average for the same period over the previous 4 years. The tsunami-affected air-sea interactions may have increased both wind speed and rainfall immediately after the tsunami week, which directly led to the variations in SSC. 14. A new XRF probe for in-situ determining concentration of multi-elements in ocean sediments International Nuclear Information System (INIS) Ge Liangquan; Lai Wanchang; Zhou Sichun; Lin Ling; Lin Yanchang; Ren Jiafu 2001-01-01 The author introduces a new X-ray fluorescence probe for in-situ determination of the concentrations of multiple elements in ocean sediments. The probe consists of a Si-PIN X-ray detector with an electro-thermal cooler, two isotope sources, essential electrical signal processing units and a notebook computer. More than 10 elements can be simultaneously determined at a detection limit of (10-200) x 10^-6 and a precision of 5%-30% without a liquid nitrogen supply. Tests show that the probe can perform the analytical tasks under water at depths of less than 1000 meters. 15. Open ocean pelago-benthic coupling: cyanobacteria as tracers of sedimenting salp faeces Science.gov (United States) Pfannkuche, Olaf; Lochte, Karin 1993-04-01 Coupling between surface water plankton and abyssal benthos was investigated during a mass development of salps (Salpa fusiformis) in the Northeast Atlantic. Cyanobacteria numbers and composition of photosynthetic pigments were determined in faeces of captured salps from surface waters, sediment trap material, detritus from plankton hauls, surface sediments from 4500-4800 m depth and Holothurian gut contents. Cyanobacteria were found in all samples containing salp faeces and also in the guts of deep-sea Holothuria.
The ratio between zeaxanthin (typical of cyanobacteria) and sum of chlorophyll a pigments was higher in samples from the deep sea when compared to fresh salp faeces, indicating that this carotenoid persisted longer in the sedimenting material than total chlorophyll a pigments. The microscopic and chemical observations allowed us to trace sedimenting salp faeces from the epipelagial to the abyssal benthos, and demonstrated their role as a fast and direct link between both systems. Cyanobacteria may provide a simple tracer for sedimenting phytodetritus. 16. Modelling the morphology of sandy spits DEFF Research Database (Denmark) Pedersen, Dorthe; Deigaard, Rolf; Fredsøe, Jørgen 2008-01-01 The shape, dimensions and growth rate of an accumulating sandy spit is investigated by a theoretical and experimental study. The idealised case of a spit growing without change of form under a constant wave forcing is considered. The longshore wave-driven sediment transport is taken to be dominant...... that with this assumption the dimensions of the spit cannot be determined. The width and shape of a finite spit is therefore determined from simulations with an area model for the wave-driven current and sediment transport along the spit. In this case the curvature effects from the spit on the longshore sediment transport...... conducted in a wave tank an accumulating spit was formed at the down-drift end of a uniform stretch of coast exposed to waves approaching at an angle. The spit approached equilibrium dimensions when a constant wave climate was applied. The radius of curvature of the spit varied according to the height... 17. Bacteria, taxonomic code, and other data collected from G.W. PIERCE in North Atlantic Ocean from sediment sampler; 20 February 1976 to 23 March 1976 (NODC Accession 7601642) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — Bacteria, taxonomic code, and other data were collected using sediment sampler and other instruments in the North Atlantic Ocean from G.W. PIERCE. Data were... 18. Hydrothermal Alteration Promotes Humic Acid Formation in Sediments: A Case Study of the Central Indian Ocean Basin Science.gov (United States) Sarma, Nittala S.; Kiran, Rayaprolu; Rama Reddy, M.; Iyer, Sridhar D.; Peketi, A.; Borole, D. V.; Krishna, M. S. 2018-01-01 Anomalously high concentrations of humic-rich dissolved organic matter (DOM) in extant submarine hydrothermal vent plumes traveled far from source are increasingly being reported. This DOM, able to mobilize trace metals (e.g., Fe2+) has been hypothesized as originating from organic matter produced by thermogenic bacteria. To eliminate a possible abiogenic origin of this DOM, study is required of well-preserved organic compounds that can be attributed to thermogenic bacteria. The Central Indian Ocean Basin (CIOB) is part of a diffuse plate boundary and an intraplate deformation zone. Coarse fraction (>63 µ) characteristics, mineralogy, magnetic susceptibility, and geochemistry were examined in sediments of a core raised close to a north-south fracture zone near the Equator. Two horizons of distinctly brown-colored sediments were shown as hydrothermally altered from their charred fragments and geochemistry (CaCO3, Corg, Ti/Al, Al/(Al + Fe + Mn), Sr/Ba, Mg/Li, Mn micronodules, Fe/Mn). We examined whether humic substances were preserved in these sediments, and if so whether their carbon isotope distribution would support their hydrothermal origin. 
Alkali extraction of sediments afforded humic acids (HA) in yields up to 1.2% in the brown sediments. The remaining portions of the core had nil or low concentrations of HA. The carbon of hydrothermal HA is isotopically heavier (average δ13C, ˜ -16.3‰) compared to nonhydrothermal HA (-18.1‰), suggesting that they were probably formed from organic matter that remained after elimination of lighter carbon enriched functional groups during diagenesis. The results provide compelling evidence of HA formation from lipids originating from thermogenic bacteria. 19. Efficacy of 230Th normalization in sediments from the Juan de Fuca Ridge, northeast Pacific Ocean Science.gov (United States) Costa, Kassandra; McManus, Jerry 2017-01-01 230Th normalization is an indispensable method for reconstructing sedimentation rates and mass fluxes over time, but the validity of this approach has generated considerable debate in the paleoceanographic community. 230Th systematics have been challenged with regards to grain size bias, sediment composition (CaCO3), water column advection, and other processes. In this study, we investigate the consequences of these effects on 230Th normalization from a suite of six cores on the Juan de Fuca Ridge. The proximity of these cores (carbonate preservation, both of which may limit the usage of 230Th in this region. Despite anticipated complications, 230Th normalization effectively reconstructs nearly identical particle rain rates from all six cores, which are summarily unrelated to the total sedimentation rates as calculated from the age models. Instead the total sedimentation rates are controlled almost entirely by sediment focusing and winnowing, which are highly variable even over the short spatial scales investigated in this study. Furthermore, no feedbacks on 230Th systematics were detected as a consequence of sediment focusing, coarse fraction variability, or calcium carbonate content, supporting the robustness of the 230Th normalization technique. 20. Estimation of potentially contaminated sediment volume in cases of oil spill in a summer conditions to sandy beaches of Rio Grande do Sul, Brazil; Estimativa do volume sedimentar potencialmente contaminado em casos de derrame de oleo em condicoes de verao para praias arenosas do Rio Grande do Sul Energy Technology Data Exchange (ETDEWEB) Costi, Juliana; Calliari, Lauro J. [Fundacao Universidade Federal do Rio Grande (FURG), RS (Brazil) 2008-07-01 Field experiments relating oil of two different densities with sediment penetration along ocean beaches with distinct morphodynamic behavior along the RS coastline indicates that, for both types of oil, higher penetration is associated to beaches which display higher mean grain size. Based on penetration depth it is possible to estimate the volume of contaminated sediments due to oil spills that eventually can reach the coast. Sediment cores sampled at 80 days interval at two different places characterized by a dissipative and a intermediate beaches indicate a higher variation of the sediment parameters and volume associated to the intermediate beach. (author) 1. Sedimentation Digital Repository Service at National Institute of Oceanography (India) Rixen, T.; Guptha, M.V.S.; Ittekkot, V. opal ratios. Such changes are assumed to have lowered the atmospheric CO sub(2) concentration significantly during glacial times. The differences between estimated deep ocean fluxes derived from satellite data and measured deep fluxes are lower than... 2. 
Basalt microlapilli in deep sea sediments of Indian Ocean in the vicinity of Vityaz fracture zone Digital Repository Service at National Institute of Oceanography (India) Nath, B.N.; Iyer, S.D. Two cores recovered from the flanks of Mid-India oceanic ridge in the vicinity of Vityaz fracture zone consist of discrete pyroclastic layers at various depths. These layers are composed of coarse-grained, angular basaltic microlapilli in which... 3. Development of an assessment methodology for the disposal of high-level radioactive waste into deep ocean sediments International Nuclear Information System (INIS) Murray, C.N.; Stanners, D.A. 1982-01-01 This paper presents the results of a theoretical study concerning the option of disposal of vitrified high activity waste (HAW) into deep ocean sediments. The development of a preliminary methodology is presented which concerns the assessment of the possible effects of a release of radioactivity on the ecosystem and eventually on man. As the long-term hazard is considered basically to be due to transuranic elements (and daughter products) the period studied for the assessment is from 10 3 to 10 6 years. A simple ecosystem model is developed so that the transfer of activity between different compartments of the systems, e.g. the sediment column, sediment-water interface, deep sea water column, can be estimated. A critical pathway analysis is made for an imaginary critical group in order to complete the assessment. A sensitivity analysis is undertaken using the computed minimum-maximum credible values for the different parameters used in the calculations in order to obtain a minimum-maximum dose range for a critical group. (Auth.) 4. Enhanced ocean carbon storage from anaerobic alkalinity generation in coastal sediments Directory of Open Access Journals (Sweden) H. Thomas 2009-02-01 Full Text Available The coastal ocean is a crucial link between land, the open ocean and the atmosphere. The shallowness of the water column permits close interactions between the sedimentary, aquatic and atmospheric compartments, which otherwise are decoupled at long time scales (≅ 1000 yr in the open oceans. Despite the prominent role of the coastal oceans in absorbing atmospheric CO2 and transferring it into the deep oceans via the continental shelf pump, the underlying mechanisms remain only partly understood. Evaluating observations from the North Sea, a NW European shelf sea, we provide evidence that anaerobic degradation of organic matter, fuelled from land and ocean, generates total alkalinity (AT and increases the CO2 buffer capacity of seawater. At both the basin wide and annual scales anaerobic AT generation in the North Sea's tidal mud flat area irreversibly facilitates 7–10%, or taking into consideration benthic denitrification in the North Sea, 20–25% of the North Sea's overall CO2 uptake. At the global scale, anaerobic AT generation could be accountable for as much as 60% of the uptake of CO2 in shelf and marginal seas, making this process, the anaerobic pump, a key player in the biological carbon pump. Under future high CO2 conditions oceanic CO2 storage via the anaerobic pump may even gain further relevance because of stimulated ocean productivity. 5. Coral reef sedimentation on Rodrigues and the Western Indian Ocean and its impact on the carbon cycle. Science.gov (United States) Rees, Siwan A; Opdyke, Bradley N; Wilson, Paul A; Fifield, L Keith 2005-01-15 Coral reefs in the southwest Indian Ocean cover an area of ca. 
18,530 km2 compared with a global reef area of nearly 300,000 km2. These regions are important as fishing grounds, tourist attractions and as a significant component of the global carbon cycle. The mass of calcium carbonate stored within Holocene neritic sediments is a number that we are only now beginning to quantify with any confidence, in stark contrast to the mass and sedimentation rates associated with pelagic calcium carbonate, which have been relatively well defined for decades. We report new data that demonstrate that the reefs at Rodrigues, like those at Reunion and Mauritius, only reached a mature state (reached sea level) by 2-3 ka: thousands of years later than most of the reefs in the Australasian region. Yet field observations show that the large lagoon at Rodrigues is already completely full of carbonate detritus (typical lagoon depth less than 1 m at low spring tide). The presence of aeolian dunes at Rodrigues indicates periodic exposure of past lagoons throughout the Pleistocene. The absence of elevated Pleistocene reef deposits on the island indicates that the island has not been uplifted. Most Holocene reefs are between 15 and 20 m in thickness and those in the southwest Indian Ocean appear to be consistent with this observation. We support the view that the CO2 flux associated with coral-reef growth acts as a climate change amplifier during deglaciation, adding CO2 to a warming world. southwest Indian Ocean reefs could have added 7-10% to this global flux during the Holocene. 6. Fatty acids in sediments and phytoplankton data were collected from the Equatorial Pacific Ocean as part of the Joint Global Ocean Flux Study/Equatorial Pacific Basin Study (JGOFS/EQPAC) project., from 1992-02-03 to 1992-12-13 (NODC Accession 9700180) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — Fatty acids in sediments and phytoplankton data were collected using plankton tow, sediments sampler - corer, pump and CTD casts from the R/V THOMAS THOMPSON in the... 7. Literature Review of Unconsolidated Sediment in San Francisco Bay and Nearby Pacific Ocean Coast Directory of Open Access Journals (Sweden) Barry R. Keller 2009-09-01 Full Text Available A review of the geologic literature regarding sedimentation in the San Francisco Bay estuarine system shows that the main part of the bay occupies a structural tectonic depression that developed in Pleistocene time. Eastern parts, including San Pablo Bay and Suisun Bay, have had sedimentation throughout late Mesozoic and Tertiary. Carquinez Strait and the Golden Gate may represent antecedent stream erosion. Sedimentation has included estuarine, alluvial, and eolian deposition. The ages of estuarine deposition includes the modern high sea level stand and earlier Pleistocene interglacial periods. Sediment sources can be generally divided into the Coast Ranges, particularly the Franciscan Complex, and “Sierran.” Much of the estuarine system is floored by very fine sediment, with local areas of sand floor. Near the Golden Gate, sediment size decreases in both directions away from the deep channel. Bedforms include sand waves (submarine dunes, flat beds, and rock and boulders. These are interpreted in terms of dominant transport directions. Near the Golden Gate is an ebb-tidal delta on the outside (including San Francisco bar and a flood-tidal delta on the inside (parts of Central Bay. 
The large tidal prism causes strong tidal currents, which in the upper part of the estuary are normally much stronger than river currents, except during large floods. Cultural influences have altered conditions, including hydraulic mining debris, blasting of rocks, dredging of navigation channels, filling of the bay, and commercial sand mining. Many of these have served to decrease the tidal prism, correspondingly decreasing the strength of tidal currents. 8. The Cenozoic western Svalbard margin: sediment geometry and sedimentary processes in an area of ultraslow oceanic spreading Science.gov (United States) Amundsen, Ingrid Marie Hasle; Blinova, Maria; Hjelstuen, Berit Oline; Mjelde, Rolf; Haflidason, Haflidi 2011-12-01 The northeastern high-latitude North Atlantic is characterised by the Bellsund and Isfjorden fans on the continental slope off west Svalbard, the asymmetrical ultraslow Knipovich spreading ridge and a 1,000 m deep rift valley. Recently collected multichannel seismic profiles and bathymetric records now provide a more complete picture of sedimentary processes and depositional environments within this region. Both downslope and alongslope sedimentary processes are identified in the study area. Turbidity currents and deposition of glacigenic debris flows are the dominating downslope processes, whereas mass failures, which are a common process on glaciated margins, appear to have been less significant. The slide debrite observed on the Bellsund Fan is most likely related to a 2.5-1.7 Ma old failure on the northwestern Barents Sea margin. The seismic records further reveal that alongslope current processes played a major role in shaping the sediment packages in the study area. Within the Knipovich rift valley and at the western rift flank accumulations as thick as 950-1,000 m are deposited. We note that oceanic basement is locally exposed within the rift valley, and that seismostratigraphic relationships indicate that fault activity along the eastern rift flank lasted until at least as recently as 1.5 Ma. A purely hemipelagic origin of the sediments in the rift valley and on the western rift flank is unlikely. We suggest that these sediments, partly, have been sourced from the western Svalbard—northwestern Barents Sea margin and into the Knipovich Ridge rift valley before continuous spreading and tectonic activity caused the sediments to be transported out of the valley and westward. 9. Velocity-porosity relationships for slope apron and accreted sediments in the Nankai Trough Seismogenic Zone Experiment, Integrated Ocean Drilling Program Expedition 315 Site C0001 Science.gov (United States) Hashimoto, Y.; Tobin, H. J.; Knuth, M. 2010-12-01 In this study, we focused on the porosity and compressional wave velocity of marine sediments to examine the physical properties of the slope apron and the accreted sediments. This approach allows us to identify characteristic variations between sediments being deposited onto the active prism and those deposited on the oceanic plate and then carried into the prism during subduction. For this purpose we conducted ultrasonic compressional wave velocity measurements on the obtained core samples with pore pressure control. 
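As background for the velocity-porosity comparison in this entry (which continues below), one commonly quoted first-order mapping from porosity to P-wave velocity is the Wyllie time-average relation. The Python sketch below uses illustrative end-member velocities that are assumptions rather than values from the study; note that the time-average form is known to overestimate velocity in unconsolidated, high-porosity sediment, which is one reason empirical global curves of the kind cited in the entry are preferred for such comparisons.

def wyllie_velocity(porosity, v_fluid=1.5, v_matrix=5.5):
    """Wyllie time-average estimate of P-wave velocity (km/s) for a fractional
    porosity; the fluid and matrix end-member velocities (km/s) are illustrative."""
    slowness = porosity / v_fluid + (1.0 - porosity) / v_matrix
    return 1.0 / slowness

# Example at the 0.55 porosity mentioned in the entry:
print(round(wyllie_velocity(0.55), 2), "km/s")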
Site C0001 in the Nankai Trough Seismogenic Zone Experiment transect of the Integrated Ocean Drilling Program is located in the hanging wall of the midslope megasplay thrust fault in the Nankai subduction zone offshore of the Kii peninsula (SW Japan), penetrating an unconformity at ~200 m depth between slope apron sediments and the underlying accreted sediments. We used samples from Site C0001. Compressional wave velocity from laboratory measurements ranges from ~1.6 to ~2.0 km/s at hydrostatic pore pressure conditions estimated from sample depth. The compressional wave velocity-porosity relationship for the slope apron sediments shows a slope almost parallel to the slope for global empirical relationships. In contrast, the velocity-porosity relationship for the accreted sediments shows a slightly steeper slope than that of the slope apron sediments at 0.55 of porosity. This higher slope in the velocity-porosity relationship is found to be characteristic of the accreted sediments. Textural analysis was also conducted to examine the relationship between microstructural texture and acoustic properties. Images from micro-X-ray CT indicated a homogeneous and well-sorted distribution of small pores both in shallow and in deeper sections. Other mechanisms such as lithology, clay fraction, and abnormal fluid pressure were found to be insufficient to explain the higher velocity for accreted sediments. The higher slope in the velocity-porosity relationship for … 10. Biogenic silica in space and time in sediments of Central Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Pattan, J.N.; Gupta, S.M.; Mudholkar, A.V.; Parthiban, G. … rate averages 2.25 x 10^-5 g cm-2 y-1 and it contributes from 33 to 50% of the total silica. Higher biogenic silica content of the surface sediment is well correlated with the Mn, Cu and Ni concentrations of the overlying manganese nodules. Higher biogenic... 11. Geochemistry of deep-sea sediment cores from the Central Indian Ocean Basin Digital Repository Service at National Institute of Oceanography (India) Mudholkar, A.V.; Pattan, J.N.; Parthiban, G. … thought to be of diagenetic origin. Metals are supplied by upward migration from a suboxic to anoxic zone at an intermediate depth of 12-35 cm below the sediment-water interface in all the cores. Buried maxima in transition metal concentration at depth...
13. 2014 U.S. Geological Survey CMGP LiDAR: Post Sandy (New Jersey) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — TASK NAME: USGS New Jersey CMGP Sandy Lidar 0.7 Meter NPS LIDAR lidar Data Acquisition and Processing Production Task USGS Contract No. G10PC00057 Task Order No.... 14. 2012 USGS EAARL-B Coastal Topography: Post-Sandy, First Surface (NJ) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — ASCII xyz and binary point-cloud data, as well as a digital elevation model (DEM) of a portion of the New Jersey coastline, pre- and post-Hurricane Sandy (October... 15. 2012-2013 Post-Hurricane Sandy EAARL-B Submerged Topography - Barnegat Bay, New Jersey Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — Binary point-cloud data for part of Barnegat Bay, New Jersey, post-Hurricane Sandy (October 2012 hurricane), were produced from remotely sensed, geographically... 16. Sediment distribution in the oceans: the Atlantic between 10° and 19°N NARCIS (Netherlands) Collette, B.J.; Ewing, J.I.; Lagaay, R.A.; Truchan, M. Between 10° and 19°N the North Atlantic Ocean has been covered by four east-west crossings and one north-south section at 60°W, using a continuous seismic reflection recorder (air gun). The northernmost section extends to the Canary Islands. The region comprises a great variety of phenomena: … 17. Ferromanganese nodules and their associated sediments from the Central Indian Ocean Basin: Rare earth element geochemistry Digital Repository Service at National Institute of Oceanography (India) Pattan, J.N.; Rao, Ch.M.; Migdisov, A.A.; Colley, S.; Higgs, N.C.; Demidenko, L. Ferromanganese Nodules and their Associated Sediments from the Central Indian Ocean Basin: Rare Earth Element Geochemistry J.N. Pattan, Ch.M. Rao, National Institute of Oceanography, Dona Paula, Goa, India; A.A. Migdisov, Institute of Geochemistry, Russian Academy of Sciences, Moscow, Russia; S. Colley, N.C. Higgs, Southampton Oceanography Centre, Empress Dock, Southampton... 18. Globigerina pachyderma (Ehrenberg) in the shelf-slope sediments of northern Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Setty, M.G.A.P. …° latitudes in the Arabian Sea and 18° latitude in the Bay of Bengal is considered and compared with similar occurrence from other oceans of the world.
Considering various factors like movement of low salinity, low temperate water masses, mixing and upwelling... 19. Mineralogy of polymetallic nodules and associated sediments from the Central Indian Ocean Basin Digital Repository Service at National Institute of Oceanography (India) Rao, V.P in montmorillonite, chlorite and illite, delta MnO sub(2) is the dominant mineral phase in the nodules of the southern Central Indian Ocean Basin. These nodules have a smooth surface texture, are relatively rich in Fe and Co, and are associated with pelagic clay... 20. Late Eocene to present isotopic (Sr-Nd-Pb) and geochemical evolution of sediments from the Lomonosov Ridge, Arctic Ocean: Implications for continental sources and linkage with the North Atlantic Ocean Science.gov (United States) Stevenson, Ross; Poirier, André; Véron, Alain; Carignan, Jean; Hillaire-Marcel, Claude 2015-09-01 New geochemical and isotopic (Sr, Nd, Pb) data are presented for a composite sedimentary record encompassing the past 50 Ma of history of sedimentation on the Lomonosov Ridge in the Arctic Ocean. The sampled sediments encompass the transition of the Arctic basin from an enclosed anoxic basin to an open and ventilated oxidized ocean basin. The transition from anoxic basin to open ventilated ocean is accompanied by at least three geochemical and isotopic shifts and an increase in elements (e.g., K/Al) controlled by detrital minerals highlighting significant changes in sediment types and sources. The isotopic compositions of the sediments prior to ventilation are more variable but indicate a predominance of older crustal contributions consistent with sources from the Canadian Shield. Following ventilation, the isotopic compositions are more stable and indicate an increased contribution from younger material consistent with Eurasian and Pan-African crustal sources. The waxing and waning of these sources in conjunction with the passage of water through Fram Strait underlines the importance of the exchange of water mass between the Arctic and North Atlantic Oceans. 1. Assessment of gamma radionuclides in sediments from the Atlantic and Pacific Oceans of Guatemala International Nuclear Information System (INIS) Orozco Chilel, R.M. 1997-01-01 The study consisted in the assesment of radioactivity levels in marine sediments of Guatemala due to gamma radionuclides. The samples were taken from 5 selected places, the activity of each sediment was measured by gamma spectrometry using an GE High-Purity detector. The methodology used consisted in to measure the efficiency of the Ge detector, then the calibration for Pb-210 was made. The radioactivity ranges from 1.69 Bq/Kg to 8.68 Bq/Kg for Cs-137, 356.99 Bq/Kg to 431.18 Bq/Kg for K-40, 48.71 Bq/Kg to 59.94 Bq/Kg for Ra-226 and 151.283 Bq/Kg to 224.47 Bq/Kg for Pb-210 2. Nitrogen Cycling in Permeable Sediments: Process-based Models for Streams and the Coastal Ocean OpenAIRE Azizian, Morvarid 2017-01-01 Bioavailable forms of nitrogen, such as nitrate, are necessary for aquatic ecosystem productivity. Excess nitrate in aquatic systems, however, can adversely affect ecosystems and degrade both surface water and groundwater. Some of this excess nitrate can be removed in the sediments that line the bottom of rivers and coastal waters, through the exchange of water between surface water and groundwater (known as hyporheic exchange).Several process-based models have been proposed for estimating ni... 3. 
Determination of rare earth, major and trace elements in authigenic fraction of Andaman Sea (Northeastern Indian Ocean) sediments by inductively coupled plasma-mass spectrometry Digital Repository Service at National Institute of Oceanography (India) Alagarsamy, R.; You, C.-F.; Nath, B.N.; SijinKumar, A.V. Downcore variation of rare earth elements (REEs) in the authigenic Fe-Mn oxides of a sediment core (covering a record of last approx. 40 kyr) from the Andaman Sea, a part of the Indian Ocean shows distinctive positive Ce and Eu anomalies... 4. Nature, distribution and origin of clay minerals in grain size fractions of sediments from manganese nodule field, Central Indian Ocean Basin Digital Repository Service at National Institute of Oceanography (India) Rao, V.P.; Nath, B.N. DT, IR and X-ray diffraction analyses have been carried out on 3 grain size fractions (1, 1-2 and 2-4 mu m) of sediments from the Central Indian Ocean Basin. Results indicate that there are 2 smectite minerals (montmorillonite and Fe... 5. Distribution of foraminifera and calcareous nannoplankton in quaternary sediments of the Eastern Angola Basin in response to climatic and oceanic fluctuations NARCIS (Netherlands) Zachariasse, W.J.; Schmidt, R.R.; Leeuwen, R.J.W. van 1984-01-01 The impact of the Zaire River on the oceanic environment is clearly illustrated in the surface sediments by anomalously high carbonate dissolution rates over a large area off the river mouth. This anomaly results from the high supply of terrestrial organic matter brought into the Angola Basin by the 6. Quantitative and phylogenetic study of the Deep Sea Archaeal Group in sediments of the arctic mid-ocean spreading ridge Directory of Open Access Journals (Sweden) Steffen Leth eJørgensen 2013-10-01 Full Text Available In marine sediments archaea often constitute a considerable part of the microbial community, of which the Deep Sea Archaeal Group (DSAG is one of the most predominant. Despite their high abundance no members from this archaeal group have so far been characterized and thus their metabolism is unknown. Here we show that the relative abundance of DSAG marker genes can be correlated with geochemical parameters, allowing prediction of both the potential electron donors and acceptors of these organisms. We estimated the abundance of 16S rRNA genes from Archaea, Bacteria and DSAG in 52 sediment horizons from two cores collected at the slow-spreading Arctic Mid-Ocean Ridge, using qPCR. The results indicate that members of the DSAG make up the entire archaeal population in certain horizons and constitute up to ~ 50% of the total microbial community. The quantitative data were correlated to 30 different geophysical and geochemical parameters obtained from the same sediment horizons. We observed a significant correlation between the relative abundance of DSAG 16S rRNA genes and the content of organic carbon (p < 0.0001. Further, significant co-variation with iron oxide, and dissolved iron and manganese (all p < 0.0000, indicated a direct or indirect link to iron and manganese cycling. Neither of these parameters correlated with the relative abundance of archaeal or bacterial 16S rRNA genes, nor did any other major electron donor or acceptor measured. Phylogenetic analysis of DSAG 16S rRNA gene sequences reveals three monophyletic lineages with no apparent habitat-specific distribution. 
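The correlation screening described in this entry (which concludes just below) is, in practice, usually a rank-correlation test between relative gene abundance and each measured parameter. A minimal Python sketch of that kind of test is given below; the rank-correlation choice and the numbers are placeholders for illustration, not the statistical method or data of the study.

import numpy as np
from scipy.stats import spearmanr

# Placeholder values: relative DSAG 16S rRNA gene abundance (fraction of the
# total prokaryotic community) and total organic carbon (wt%) per sediment horizon.
dsag_rel_abundance = np.array([0.05, 0.12, 0.22, 0.31, 0.40, 0.48])
organic_carbon_wt = np.array([0.30, 0.52, 0.81, 1.10, 1.34, 1.62])

rho, p_value = spearmanr(dsag_rel_abundance, organic_carbon_wt)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4g}")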
In this study we support the hypothesis that members of the DSAG are tightly linked to the content of organic carbon and directly or indirectly involved in the cycling of iron and/or manganese compounds. Further, we provide a molecular tool to assess their abundance in environmental samples and enrichment cultures. 7. Uranium isotopes in rivers, estuaries and adjacent coastal sediments of western India: their weathering, transport and oceanic budget International Nuclear Information System (INIS) Borole, D.V.; Krishnaswami, S.; Somayajulu, B.L.K. 1982-01-01 The two major river systems on the west coast of India, Narbada and Tapti, their estuaries and the coastal Arabian Sea sediments have been extensively studied for their uranium concentrations and 234U/238U activity ratios. The 238U concentrations in the aqueous phase of these river systems exhibit a strong positive correlation with the sum of the major cations, and with the HCO3- ion contents. The abundance ratio of dissolved U to the sum of the major cations in these waters is similar to their ratio in typical crustal rocks. In the estuaries, both 238U and its great-granddaughter 234U behave conservatively beyond chlorosities of 0.14 g/l. A review of the uranium isotope measurements in river waters yields a discharge-weighted average 238U concentration of 0.22 μg/l with a 234U/238U activity ratio of 1.20 ± 0.06. The residence times of uranium isotopes in the oceans estimated from the 238U concentration and the 234U/238U A.R. of the rivers yield conflicting results; the material balance of uranium isotopes in the marine environment still remains a paradox. If the disparity between the results is real, then an additional 234U flux of about 0.25 dpm cm-2 per 1000 yr into the oceans is necessitated. (author) 8. Applying machine learning to global surface ocean and seabed data to reveal the controls on the distribution of deep-sea sediments Science.gov (United States) Dutkiewicz, Adriana; Müller, Dietmar; O'Callaghan, Simon 2017-04-01 The world's ocean basins contain a rich and nearly continuous record of environmental fluctuations preserved as different types of deep-sea sediments. The sediments represent the largest carbon sink on Earth and its largest geological deposit. Knowing the controls on the distribution of these sediments is essential for understanding the history of ocean-climate dynamics, including changes in sea level and ocean circulation, as well as biological perturbations. Indeed, the bulk of deep-sea sediments comprises the remains of planktonic organisms that originate in the photic zone of the global ocean, implying a strong connection between the seafloor and the sea surface. Machine-learning techniques are perfectly suited to unravelling these controls as they are able to handle large sets of spatial data and they often outperform traditional spatial analysis approaches. Using a support vector machine algorithm we recently created the first digital map of seafloor lithologies (Dutkiewicz et al., 2015) based on 14,400 surface samples. This map reveals significant deviations in the distribution of deep-sea lithologies from hitherto hand-drawn maps based on far fewer data points. It also allows us to explore quantitatively, for the first time, the relationship between oceanographic parameters at the sea surface and lithologies on the seafloor.
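The entry above continues below with the coupled predictor grids and a probabilistic Gaussian process classifier evaluated by five-fold cross-validation. As a toy illustration of that general workflow (a classifier plus k-fold cross-validation on per-sample predictors), here is a minimal Python sketch; the feature set, the labels and the use of a support vector machine stand in for the published data and code, which are not reproduced here.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder predictors per seafloor sample: water depth (m), sea-surface
# temperature (deg C), salinity and dissolved oxygen -- stand-ins for the kinds
# of oceanographic grids coupled to the lithology point samples in the entry.
n_samples = 500
X = np.column_stack([
    rng.uniform(-6000.0, -200.0, n_samples),  # depth
    rng.uniform(-2.0, 30.0, n_samples),       # sea-surface temperature
    rng.uniform(32.0, 37.0, n_samples),       # salinity
    rng.uniform(2.0, 8.0, n_samples),         # dissolved oxygen
])
# Placeholder lithology classes (e.g. 0 = calcareous ooze, 1 = clay, 2 = diatom ooze).
y = rng.integers(0, 3, n_samples)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)     # five-fold cross-validation,
print("fold accuracies:", np.round(scores, 2))  # i.e. 20% of the data withheld per fold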
We subsequently coupled this global point sample dataset of 14,400 seafloor lithologies to bathymetry and oceanographic grids (sea-surface temperature, salinity, dissolved oxygen and dissolved inorganic nutrients) and applied a probabilistic Gaussian process classifier in an exhaustive combinatorial fashion (Dutkiewicz et al., 2016). We focused on five major lithologies (calcareous sediment, diatom ooze, radiolarian ooze, clay and lithogenous sediment) and used a computationally intensive five-fold cross-validation, withholding 20% of the data at each iteration, to assess the predictive performance of the machine learning method. We find that 9. Coccolith distribution patterns in South Atlantic and Southern Ocean surface sediments in relation to environmental gradients DEFF Research Database (Denmark) Boeckel, B.; Baumann, K.-H.; Henrich, R. 2006-01-01 affinities were ascertained. In general, Emiliania huxleyi is the most abundant species of the recent coccolith assemblages in the study region. However, the lower photic zone taxa, composed of Florisphaera profunda and Gladiolithus flabellatus often dominate the assemblages between 20°N and 30°S. If E....... huxleyi is excluded, Calcidiscus leptoporus and F. profunda become the most abundant species, each dominating discrete oceanographic regimes. While F. profunda is very abundant in the sediments underneath warmer, stratified surface waters with a deep nutricline, Calcidiscus leptoporus is encountered... 10. Geochemistry of ferromanganese nodule-sediment pairs from central Indian Ocean Basin Digital Repository Service at National Institute of Oceanography (India) Pattan, J.N.; Parthiban, G. analyses and remaining part of the solution was used for rare earth element separation using Dowex AG 50W-X8 (200-400 mesh) ion exchange resin following the procedure of Jarvis and Jarvis (1985). Major, few trace and rare earth elements were analysed... and precision based on duplicate sample analysis were ± 4 %. For total silica measurements, both nodules and sediments were fused with lithium metaborate in a furnace and clear solutions obtained were analysed with ICP-AES. Accuracy was better than ± 4... 11. Dispersed Volcanic Ash in Sediment Entering NW Pacific Ocean Subduction Zones: Towards a Regional Perspective Science.gov (United States) Scudder, R. P.; Murray, R. W.; Underwood, M.; Kutterolf, S.; Plank, T.; Dyonisius, M.; Arshad, M. A. 2011-12-01 Volcanic ash has long been recognized to be an important component of the global sedimentary system. Ash figures prominently in a number of sedimentary and petrophysical investigations, including how the fluid budget of subducting sediment will be affected by hydration/dehydration reactions. Additionally, many studies focus on discrete ash layers, and how to link their presence with volcanism, climate, arc evolution, biological productivity, and other processes. Less widely recognized is the ash that is mixed into the bulk sediment, or "dispersed" ash. Dispersed ash is quantitatively significant and is an under-utilized source of critical geochemical and tectonic information. Based on geochemical studies of ODP Site 1149, a composite of DSDP Sites 579 & 581, as well as IODP Sites C0011 & C0012 drilled during Expedition 322, we will show the importance of dispersed ash to the Izu-Bonin-Marianas, Kurile-Kamchatka and Nankai subduction zones. Initial geochemical analyses of the bulk sediment, as related to dispersed ash entering these subduction systems are presented here. 
Geochemical analysis shows that the characteristics of the three sites exhibit some variability consistent with observed lithological variations. For example, the average SiO2/Al2O3 ratios at Site 1149, Site C0011 and Site C0012 average 3.7. The composite of Sites 579 & 581 exhibits a higher average of 4.6. There are contrasts between other key major elemental indicators as well (e.g., Fe2O3). Ternary diagrams such as K2O-Na2O-CaO show that there are at least two distinct geochemical fields with Sites 1149, C0011 and C0012 clustering in one and Sites 579 & 581 in the other. Q-mode Factor Analysis was performed on the bulk sediment chemical data in order to determine the composition of potential end members of these sites. The multivariate statistics indicate that Site 1149 has 3-4 end members, consistent with the results of Scudder et al. (2009, EPSL, v. 284, pp 639), while each of the other sites 12. Hydrothermal petroleum in the sediments of the Andaman Backarc Basin, Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Venkatesan, M.I.; Ruth, E.; Rao, P.S.; Nath, B.N.; Rao, B.R. inthesediments ofthe AndamanBackarc Basin, IndianOcean § M.I.Venkatesan a, *,E. Ruth b ,P.S. Rao c ,B.N. Nath c , B.R. Rao c a Institute of Geophysics and Planetary Physics and NASA Astrobiology Institute, University of California, Los Angeles, CA 90095-1567, USA... b Dept. of Civil and Environmental Engineering, University of California, Los Angeles, CA 90095-1593, USA c National Institute of Oceanography, Dona Paula, Goa 403 004, India Received 1 March 2002; accepted 13 August 2002 Editorial handling by B... 13. Biodiversity of nematode assemblages from deep-sea sediments of the Atacama Slope and Trench (South Pacific Ocean) Science.gov (United States) Gambi, C.; Vanreusel, A.; Danovaro, R. 2003-01-01 Nematode assemblages were investigated (in terms of size spectra, sex ratio, Shannon diversity, trophic structure and diversity, rarefaction statistics, maturity index, taxonomic diversity and taxonomic distinctness) at bathyal and hadal depths (from 1050 to 7800 m) in the deepest trench of the South Pacific Ocean: the Trench of Atacama. This area, characterised by very high concentrations of nutritionally-rich organic matter also at 7800-m depth, displayed characteristics typical of eutrophic systems and revealed high nematode densities (>6000 ind. 10 cm -2). Nematode assemblages from the Atacama Trench displayed a different composition than at bathyal depths. At bathyal depths 95 genera and 119 species were found (Comesomatidae, Cyatholaimidae, Microlaimidae, Desmodoridae and Xyalidae being dominant), whereas in the Atacama Trench only 29 genera and 37 species were encountered (dominated by Monhysteridae, Chromadoridae, Microlaimidae, Oxystominidae and Xyalidae). The genus Monhystera (24.4%) strongly dominated at hadal depths and Neochromadora, and Trileptium were observed only in the Atacama Trench, but not at bathyal depths. A reduction of the mean nematode size (by ca. 67%) was observed between bathyal and hadal depths. Since food availability was not a limiting factor in the Atacama Trench sediments, other causes are likely to be responsible for the reduction of nematode species richness and body size. The presence of a restricted number of families and genera in the Atacama Trench might indicate that hadal sediments limited nematode colonisation. 
Most of the genera reaching very high densities in Trench sediments (e.g., Monhystera) are opportunistic and were responsible for the significant decrease of the maturity index. The dominance of opportunists, which are known to be characterised by small sizes, might have contributed to the reduced nematode size at hadal depths. Shannon diversity and species richness decreased in hadal water depth and this pattern 14. Directional analysis of the storm surge from Hurricane Sandy 2012, with applications to Charleston, New Orleans, and the Philippines. Science.gov (United States) Drews, Carl; Galarneau, Thomas J 2015-01-01 Hurricane Sandy in late October 2012 drove before it a storm surge that rose to 4.28 meters above mean lower low water at The Battery in lower Manhattan, and flooded the Hugh L. Carey automobile tunnel between Brooklyn and The Battery. This study examines the surge event in New York Harbor using the Weather Research and Forecasting (WRF) atmospheric model and the Coupled-Ocean-Atmosphere-Wave- Sediment Transport/Regional Ocean Modeling System (COAWST/ROMS). We present a new technique using directional analysis to calculate and display maps of a coastline's potential for storm surge; these maps are constructed from wind fields blowing from eight fixed compass directions. This analysis approximates the surge observed during Hurricane Sandy. The directional analysis is then applied to surge events at Charleston, South Carolina, New Orleans, Louisiana, and Tacloban City, the Philippines. Emergency managers could use these directional maps to prepare their cities for an approaching storm, on planning horizons from days to years. 15. Distribution and burial of organic carbon in sediments from the Indian Ocean upwelling region off Java and Sumatra, Indonesia Science.gov (United States) Baumgart, Anne; Jennerjahn, Tim; Mohtadi, Mahyar; Hebbeln, Dierk 2010-03-01 Sediments were sampled and oxygen profiles of the water column were determined in the Indian Ocean off west and south Indonesia in order to obtain information on the production, transformation, and accumulation of organic matter (OM). The stable carbon isotope composition (δ 13C org) in combination with C/N ratios depicts the almost exclusively marine origin of sedimentary organic matter in the entire study area. Maximum concentrations of organic carbon (C org) and nitrogen (N) of 3.0% and 0.31%, respectively, were observed in the northern Mentawai Basin and in the Savu and Lombok basins. Minimum δ 15N values of 3.7‰ were measured in the northern Mentawai Basin, whereas they varied around 5.4‰ at stations outside this region. Minimum bottom water oxygen concentrations of 1.1 mL L -1, corresponding to an oxygen saturation of 16.1%, indicate reduced ventilation of bottom water in the northern Mentawai Basin. This low bottom water oxygen reduces organic matter decomposition, which is demonstrated by the almost unaltered isotopic composition of nitrogen during early diagenesis. Maximum C org accumulation rates (CARs) were measured in the Lombok (10.4 g C m -2 yr -1) and northern Mentawai basins (5.2 g C m -2 yr -1). Upwelling-induced high productivity is responsible for the high CAR off East Java, Lombok, and Savu Basins, while a better OM preservation caused by reduced ventilation contributes to the high CAR observed in the northern Mentawai Basin. The interplay between primary production, remineralisation, and organic carbon burial determines the regional heterogeneity. 
CAR in the Indian Ocean upwelling region off Indonesia is lower than in the Peru and Chile upwellings, but in the same order of magnitude as in the Arabian Sea, the Benguela, and Gulf of California upwellings, and corresponds to 0.1-7.1% of the global ocean carbon burial. This demonstrates the relevance of the Indian Ocean margin off Indonesia for the global OM burial. 16. Radionuclide distributions in deep-ocean sediment cores. Progress report, 1 October 1976 -- 31 December 1977 International Nuclear Information System (INIS) Bowen, V.T. 1978-04-01 Disruption, in the past year, of the supply of 237Pu tracer from Oak Ridge caused us to put more effort into analyses of core samples previously collected, and into data collation, than into the laboratory experiments originally projected. Accompanying this report are two review papers, one for a Congressional Committee and one in press, a report in press of a device for conducting microbiological tracer experiments under controlled atmospheres, and a description of radionuclide distributions in sediments of Atlantic and Pacific solid waste dump sites. Described in the body of the report are experiments relating the time course of association of 237Pu tracer with diatoms (dead or alive) or glass beads, to the constitution of the media, the history of the cells, or the presence of exometabolites. Also described are studies of the differential removal of 239,240Pu, 241Am, and 137Cs from coastal seawater currents contaminated by waste released from a fuel-reprocessing facility. 17. Rebuilding natural coastlines after sediment mining: the example of the Brittany coasts (English Channel and Atlantic Ocean). Science.gov (United States) Regnauld, Herve 2016-04-01 Rebuilding natural coastlines after sediment mining: the example of the Brittany coasts (English Channel and Atlantic Ocean). H. Regnauld (1), J.N. Proust (2) and H. Mahmoud (1); (1) University of Rennes 2, (2) CNRS-University of Rennes 1, France. A large part of the coasts of Brittany (western France) has been very heavily impacted by sand mining for the building of military equipment and of a large tidal power station. In some places more than 90% of the sediment was extracted from the late 1940s up to the 1960s. The mined sites were all sink sites, where sediment had been accumulating for centuries. After the sand and/or gravel extraction was stopped, the coastal sites were largely used for tourism and most of the eroded dune fields were turned into car parks. Storms produced large floods inland as most of the gravel or sand barriers didn't exist any more. Some local outcrops of inherited Holocene periglacial material with archaeological remains were eroded, and some disappeared. During the 1980s a complete shift in planning policies took place and these sites were progressively changed into nature preserves. The aim was to make them behave in a "natural" way again. The "natural" behaviour was intended in a very precise way: barriers should be able to withstand storms again and to protect inland fields from floods. In order to allow for dune rebuilding, wooden fences were erected and marram grass was artificially planted. As, from a sedimentological point of view, these sites were sink sites, accumulation was rather rapid (up to 0.25 m a year behind wooden fences) and new barriers began to build. The only problem is that they did not always build up exactly in the same place or with the same material.
Some parts of the coasts were left "unprotected" by these new barriers, ancient exposed sites became protected. Today the system as a whole may be considered as having been able to reach some level of equilibrium with the average wave conditions. It has been able to 18. Early Paleogene variations in the calcite compensation depth: new constraints using old borehole sediments from across Ninetyeast Ridge, central Indian Ocean Science.gov (United States) Slotnick, B. S.; Lauretano, V.; Backman, J.; Dickens, G. R.; Sluijs, A.; Lourens, L. 2015-03-01 Major variations in global carbon cycling occurred between 62 and 48 Ma, and these very likely related to changes in the total carbon inventory of the ocean-atmosphere system. Based on carbon cycle theory, variations in the mass of the ocean carbon should be reflected in contemporaneous global ocean carbonate accumulation on the seafloor and, thereby, the depth of the calcite compensation depth (CCD). To better constrain the cause and magnitude of these changes, the community needs early Paleogene carbon isotope and carbonate accumulation records from widely separated deep-sea sediment sections, especially including the Indian Ocean. Several CCD reconstructions for this time interval have been generated using scientific drill sites in the Atlantic and Pacific oceans; however, corresponding information from the Indian Ocean has been extremely limited. To assess the depth of the CCD and the potential for renewed scientific drilling of Paleogene sequences in the Indian Ocean, we examine lithologic, nannofossil, carbon isotope, and carbonate content records for late Paleocene - early Eocene sediments recovered at three sites spanning Ninetyeast Ridge: Deep Sea Drilling Project (DSDP) Sites 213 (deep, east), 214 (shallow, central), and 215 (deep, west). The disturbed, discontinuous sediment sections are not ideal, because they were recovered in single holes using rotary coring methods, but remain the best Paleogene sediments available from the central Indian Ocean. The δ13C records at Sites 213 and 215 are similar to those generated at several locations in the Atlantic and Pacific, including the prominent high in δ13C across the Paleocene carbon isotope maximum (PCIM) at Site 215, and the prominent low in δ13C across the early Eocene Climatic Optimum (EECO) at both Site 213 and Site 215. The Paleocene-Eocene thermal maximum (PETM) and the K/X event are found at Site 213 but not at Site 215, presumably because of coring gaps. Carbonate content at both Sites 213 and 19. Increasing coastal slump activity impacts the release of sediment and organic carbon into the Arctic Ocean Directory of Open Access Journals (Sweden) J. L. Ramage 2018-03-01 Full Text Available Retrogressive thaw slumps (RTSs are among the most active thermokarst landforms in the Arctic and deliver a large amount of material to the Arctic Ocean. However, their contribution to the organic carbon (OC budget is unknown. We provide the first estimate of the contribution of RTSs to the nearshore OC budget of the Yukon Coast, Canada, and describe the evolution of coastal RTSs between 1952 and 2011 in this area. We (1 describe the evolution of RTSs between 1952 and 2011; (2 calculate the volume of eroded material and stocks of OC mobilized through slumping, including soil organic carbon (SOC and dissolved organic carbon (DOC; and (3 estimate the OC fluxes mobilized through slumping between 1972 and 2011. 
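As a rough illustration of the volume-to-carbon bookkeeping implied by objective (2) of entry 19 above, the sketch below converts a displaced volume and an ice fraction into mobilized SOC and DOC masses. The function name and every input value are placeholders chosen for the example, not quantities taken from Ramage et al.:

```python
# Hedged sketch of the kind of stock calculation described in objective (2) of entry 19.
# All numbers below are illustrative placeholders, NOT values from Ramage et al. (2018).

def mobilized_carbon(total_volume_m3, ice_fraction, dry_bulk_density_kg_m3,
                     soc_fraction, doc_mg_per_l_ice):
    """Rough first-order estimate of organic carbon mobilized by a thaw slump.

    total_volume_m3        -- volume of displaced material (ice + sediment)
    ice_fraction           -- volumetric fraction of excess ground ice (0-1)
    dry_bulk_density_kg_m3 -- dry bulk density of the thawed sediment
    soc_fraction           -- soil organic carbon content (kg C per kg dry sediment)
    doc_mg_per_l_ice       -- dissolved organic carbon concentration in the melted ice
    """
    sediment_volume = total_volume_m3 * (1.0 - ice_fraction)
    ice_volume_l = total_volume_m3 * ice_fraction * 1000.0      # m3 -> litres
    soc_kg = sediment_volume * dry_bulk_density_kg_m3 * soc_fraction
    doc_kg = ice_volume_l * doc_mg_per_l_ice * 1e-6             # mg -> kg
    return soc_kg, doc_kg

# Hypothetical example: 1.0e6 m3 of material, 50% ice, 1200 kg/m3, 2% SOC, 10 mg/L DOC
soc, doc = mobilized_carbon(1.0e6, 0.5, 1200.0, 0.02, 10.0)
print(f"SOC ~ {soc:.3g} kg, DOC ~ {doc:.3g} kg")
```

Actual budgets would additionally need per-layer SOC profiles and ice contents rather than single bulk numbers.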
We identified RTSs using high-resolution satellite imagery from 2011 and geocoded aerial photographs from 1952 and 1972. To estimate the volume of eroded material, we applied spline interpolation on an airborne lidar dataset acquired in July 2013. We inferred the stocks of mobilized SOC and DOC from existing related literature. Our results show a 73 % increase in the number of RTSs and 14 % areal expansion between 1952 and 2011. In the study area, RTSs displaced at least 16.6×106 m3 of material, 53 % of which was ice, and mobilized 145.9×106 kg of OC. Between 1972 and 2011, 49 RTSs displaced 8.6×103 m3 yr−1 of material, adding 0.6 % to the OC flux released by coastal retreat along the Yukon Coast. Our results show that the contribution of RTSs to the nearshore OC budget is non-negligible and should be included when estimating the quantity of OC released from the Arctic coast to the ocean. 20. Can porosity affect the hyperspectral signature of sandy landscapes? Science.gov (United States) Baranoski, Gladimir V. G.; Kimmel, Bradley W. 2017-10-01 Porosity is a fundamental property of sand deposits found in a wide range of landscapes, from beaches to dune fields. As a primary determinant of the density and permeability of sediments, it represents a central element in geophysical studies involving basin modeling and coastal erosion as well as geoaccoustics and geochemical investigations aiming at the understanding of sediment transport and water diffusion properties of sandy landscapes. These applications highlight the importance of obtaining reliable porosity estimations, which remains an elusive task, notably through remote sensing. In this work, we aim to contribute to the strengthening of the knowledge basis required for the development of new technologies for the remote monitoring of environmentally-triggered changes in sandy landscapes. Accordingly, we employ an in silico investigation approach to assess the effects of porosity variations on the reflectance of sandy landscapes in the visible and near-infrared spectral domains. More specifically, we perform predictive computer simulations using SPLITS, a hyperspectral light transport model for particulate materials that takes into account actual sand characterization data. To the best of our knowledge, this work represents the first comprehensive investigation relating porosity to the reflectance responses of sandy landscapes. Our findings indicate that the putative dependence of these responses on porosity may be considerably less pronounced than its dependence on other properties such as grain size and shape. Hence, future initiatives for the remote quantification of porosity will likely require reflectance sensors with a high degree of sensitivity. 1. Nematode communities in sediments of the Kermadec Trench, Southwest Pacific Ocean Science.gov (United States) Leduc, Daniel; Rowden, Ashley A. 2018-04-01 Hadal trenches are characterized by environmental conditions not found in any other deep-sea environment, such as steep topography and periodic disturbance by turbidity flows, which are likely responsible for the distinct nature of benthic communities of hadal trenches relative to those of the abyssal plain. Nematodes are the most abundant metazoans in the deep-sea benthos, but it is not yet clear if different trenches host distinct nematode communities, and no data are yet available on the communities of most trenches, including the Kermadec Trench in the Southwest Pacific. 
Quantitative core samples from the seafloor of the Kermadec Trench were recently obtained from four sites at 6000-9000 m depth which allowed for analyses of meiofauna, and nematodes in particular, for the first time. Nematode community and trophic structure was also compared with other trenches using published data. There was a bathymetric gradient in meiofauna abundance, biomass, and community structure within the Kermadec Trench, but patterns for species richness were ambiguous depending on which metric was used. There was a change in community structure from shallow to deep sites, as well as a consistent change in community structure from the upper sediment layers to the deeper sediment layers across the four sites. These patterns are most likely explained by variation in food availability within the trench, and related to trench topography. Together, deposit and microbial feeders represented 48-92% of total nematode abundance in the samples, which suggests that fine organic detritus and bacteria are major food sources. The relatively high abundance of epigrowth feeders at the 6000 and 9000 m sites (38% and 31%, respectively) indicates that relatively freshly settled microalgal cells represent another important food source at these sites. We found a significant difference in species community structure between the Kermadec and Tonga trenches, which was due to both the presence/absence of 2. Exopolysaccharide production by a marine Pseudoalteromonas sp. strain isolated from Madeira Archipelago ocean sediments. Science.gov (United States) Roca, Christophe; Lehmann, Mareen; Torres, Cristiana A V; Baptista, Sílvia; Gaudêncio, Susana P; Freitas, Filomena; Reis, Maria A M 2016-06-25 Exopolysaccharides (EPS) are polymers excreted by some microorganisms with interesting properties and used in many industrial applications. A new Pseudoalteromonas sp. strain, MD12-642, was isolated from marine sediments and cultivated in bioreactor in saline culture medium containing glucose as carbon source. Its ability to produce EPS under saline conditions was demonstrated reaching an EPS production of 4.4g/L within 17hours of cultivation, corresponding to a volumetric productivity of 0.25g/Lh, the highest value so far obtained for Pseudoalteromonas sp. strains. The compositional analysis of the EPS revealed the presence of galacturonic acid (41-42mol%), glucuronic acid (25-26mol%), rhamnose (16-22mol%) and glucosamine (12-16mol%) sugar residues. The polymer presents a high molecular weight (above 1000kDa). These results encourage the biotechnological exploitation of strain MD12-642 for the production of valuable EPS with unique composition, using saline by-products/wastes as feedstocks. Copyright © 2016 Elsevier B.V. All rights reserved. 3. Sedimentation in a Submarine Seamount Apron at Site U1431, International Ocean Discovery Program Expedition 349, South China Sea Science.gov (United States) Dadd, K. A.; Clift, P. D.; Hyun, S.; Jiang, T.; Liu, Z. 2014-12-01 International Ocean Discovery Program (IODP) Expedition 349 Site U1431 is located near the relict spreading ridge in the East Subbasin of the South China Sea. Holes at this site were drilled close to seamounts and intersected the volcaniclastic apron. Volcaniclastic breccia and sandstone at Site U1431 are dated as late middle Miocene to early late Miocene (~8-13 Ma), suggesting a 5 m.y. duration of seamount volcanism. The apron is approximately 200 m thick and is sandwiched between non-volcaniclastic units that represent the background sedimentation. 
These comprise dark greenish gray clay, silt, and nannofossil ooze interpreted as turbidite and hemipelagic deposits that accumulated at abyssal water depths. At its base, the seamount sequence begins with dark greenish gray sandstone, siltstone, and claystone in upward fining sequences interpreted as turbidites intercalated with minor intervals of volcaniclastic breccia. Upsection the number and thickness of breccia layers increases with some beds up to 4.8 m and possibly 14.5 m thick. The breccia is typically massive, ungraded, and poorly sorted with angular to subangular basaltic clasts, as well as minor reworked subrounded calcareous mudstone, mudstone, and sandstone clasts. Basaltic clasts include nonvesicular aphyric basalt, sparsely vesicular aphyric basalt, highly vesicular aphyric basalt, and nonvesicular glassy basalt. Mudstone clasts are clay rich and contain foraminifer fossils. The matrix comprises up to 40% of the breccia beds and is a mix of clay, finer grained altered basalt clasts, and mafic vitroclasts with rare foraminifer fossils. Some layers have calcite cement between clasts. Volcaniclastic sandstone and claystone cycles interbedded with the breccia layers have current ripples and parallel laminations indicative of high-energy flow conditions during sedimentation. The breccia beds were most likely deposited as a series of debris flows or grain flows. This interpretation is supported by their 4. Distribution of PAHs and the PAH-degrading bacteria in the deep-sea sediments of the high-latitude Arctic Ocean Science.gov (United States) Dong, C.; Bai, X.; Sheng, H.; Jiao, L.; Zhou, H.; Shao, Z. 2015-04-01 Polycyclic aromatic hydrocarbons (PAHs) are common organic pollutants that can be transferred long distances and tend to accumulate in marine sediments. However, less is known regarding the distribution of PAHs and their natural bioattenuation in the open sea, especially the Arctic Ocean. In this report, sediment samples were collected at four sites from the Chukchi Plateau to the Makarov Basin in the summer of 2010. PAH compositions and total concentrations were examined with GC-MS. The concentrations of 16 EPA-priority PAHs varied from 2.0 to 41.6 ng g-1 dry weight and decreased with sediment depth and movement from the southern to the northern sites. Among the targeted PAHs, phenanthrene was relatively abundant in all sediments. The 16S rRNA gene of the total environmental DNA was analyzed with Illumina high-throughput sequencing (IHTS) to determine the diversity of bacteria involved in PAH degradation in situ. The potential degraders including Cycloclasticus, Pseudomonas, Halomonas, Pseudoalteromonas, Marinomonas, Bacillus, Dietzia, Colwellia, Acinetobacter, Alcanivorax, Salinisphaera and Shewanella, with Dietzia as the most abundant, occurred in all sediment samples. Meanwhile, enrichment with PAHs was initiated onboard and transferred to the laboratory for further enrichment and to obtain the degrading consortia. Most of the abovementioned bacteria in addition to Hahella, Oleispira, Oceanobacter and Hyphomonas occurred alternately as predominant members in the enrichment cultures from different sediments based on IHTS and PCR-DGGE analysis. To reconfirm their role in PAH degradation, 40 different bacteria were isolated and characterized, among which Cycloclasticus Pseudomonas showed the best degradation capability under low temperatures. Taken together, PAHs and PAH-degrading bacteria were widespread in the deep-sea sediments of the Arctic Ocean. 
We propose that bacteria of Cycloclasticus, Pseudomonas, Pseudoalteromonas, Halomonas, Marinomonas and Dietzia may 5. How hydrological factors initiate instability in a model sandy slope OpenAIRE Terajima, Tomomi; Miyahira, Ei-ichiro; Miyajima, Hiroyuki; Ochiai, Hirotaka; Hattori, Katsumi 2013-01-01 Knowledge of the mechanisms of rain-induced shallow landslides can improve the prediction of their occurrence and mitigate subsequent sediment disasters. Here, we examine an artificial slope's subsurface hydrology and propose a new slope stability analysis that includes seepage force and the down-slope transfer of excess shear forces. We measured pore water pressure and volumetric water content immediately prior to a shallow landslide on an artificial sandy slope of 32°: The direction of the ... 6. DEEP BIOSPHERE. Exploring deep microbial life in coal-bearing sediment down to ~2.5 km below the ocean floor. Science.gov (United States) Inagaki, F; Hinrichs, K-U; Kubo, Y; Bowles, M W; Heuer, V B; Hong, W-L; Hoshino, T; Ijiri, A; Imachi, H; Ito, M; Kaneko, M; Lever, M A; Lin, Y-S; Methé, B A; Morita, S; Morono, Y; Tanikawa, W; Bihan, M; Bowden, S A; Elvert, M; Glombitza, C; Gross, D; Harrington, G J; Hori, T; Li, K; Limmer, D; Liu, C-H; Murayama, M; Ohkouchi, N; Ono, S; Park, Y-S; Phillips, S C; Prieto-Mollar, X; Purkey, M; Riedinger, N; Sanada, Y; Sauvage, J; Snyder, G; Susilawati, R; Takano, Y; Tasumi, E; Terada, T; Tomaru, H; Trembath-Reichert, E; Wang, D T; Yamada, Y 2015-07-24 Microbial life inhabits deeply buried marine sediments, but the extent of this vast ecosystem remains poorly constrained. Here we provide evidence for the existence of microbial communities in ~40° to 60°C sediment associated with lignite coal beds at ~1.5 to 2.5 km below the seafloor in the Pacific Ocean off Japan. Microbial methanogenesis was indicated by the isotopic compositions of methane and carbon dioxide, biomarkers, cultivation data, and gas compositions. Concentrations of indigenous microbial cells below 1.5 km ranged from <10 to ~10(4) cells cm(-3). Peak concentrations occurred in lignite layers, where communities differed markedly from shallower subseafloor communities and instead resembled organotrophic communities in forest soils. This suggests that terrigenous sediments retain indigenous community members tens of millions of years after burial in the seabed. Copyright © 2015, American Association for the Advancement of Science. 7. Pleistocene coastal sedimentation in the north cliffs of Colonia del Sacramento International Nuclear Information System (INIS) Goso, C.; Perea, D.; Corona, A.; Mesa, V. 2012-01-01 This work is about the cliffs and the sucession of sandy and gravelly sediments in the north of Colonia city. The results obtained by thermoluminescence dating in sandy samples belong to the Quaternary period 8. Sandy PMO Disaster Relief Appropriations Act of 2013 Financial Data Data.gov (United States) Department of Homeland Security — Sandy PMO: Disaster Relief Appropriations Act of 2013 (Sandy Supplemental Bill) Financial Data. This is the Sandy Supplemental Quarterly Financial Datasets that are... 9. Can we constrain postglacial sedimentation in the western Arctic Ocean by ramped pyrolysis 14C? A case study from the Chukchi-Alaskan margin. Science.gov (United States) Suzuki, K.; Yamamoto, M.; Rosenheim, B. E.; Omori, T.; Polyak, L.; Nam, S. I. 2017-12-01 The Arctic Ocean underwent dramatic climate changes in the past. 
Variations in sea-ice extent and ocean current system in the Arctic cause changes in surface albedo and deep water formation, which have global climatic implications. However, Arctic paleoceanographic studies are lagging behind the other oceans due largely to chronostratigraphic difficulties. One of the reasons for this is a scant presence of material suitable for 14C dating in large areas of the Arctic seafloor. To enable improved age constraints for sediments impoverished in datable material, we apply ramped pyrolysis 14C method (Ramped PyrOx 14C, Rosenheim et al., 2008) to sedimentary records from the Chukchi-Alaska margin recovering Holocene to late-glacial deposits. Samples were divided into five fraction products by gradual heating sedimentary organic carbon from ambient laboratory temperature to 1000°C. The thermographs show a trimodal pattern of organic matter decomposition over temperature, and we consider that CO2 generated at the lowest temperature range was derived from autochthonous organic carbon contemporaneous with sediment deposition, similar to studies in the Antarctic margin and elsewhere. For verification of results, some of the samples treated for ramped pyrolysis 14C were taken from intervals dated earlier by AMS 14C using bivalve mollusks. Ultimately, our results allow a new appraisal of deglacial to Holocene deposition at the Chukchi-Alaska margin with potential to be applied to other regions of the Arctic Ocean. 10. Study of the geochemistry of the cosmogenic isotope 10Be and the stable isotope 9Be in oceanic environment. Application to marine sediment dating International Nuclear Information System (INIS) Bourles, D. 1988-01-01 The radioisotope 10 Be is formed by spallation reactions in the atmosphere. It is transferred to the oceans in soluble form by precipitation and dry deposition. The stable isotope 9 Be comes from erosion of soils and rocks in the Earth's crust. It is transported by wind and rivers and introduced to the oceans probably in both soluble and insoluble form. 9 Be was measured by atomic absorption spectrometry and 10 Be by A.M.S. The distribution of 10 Be and 9 Be between each phase extracted and the 10 Be/ 9 Be ratios associated were studied in recent marine sediments from Atlantic, Pacific, Indian oceans and Mediterranean sea. The results show that for beryllium the two essential constituent phases of marine sediments are: - the authigenic phase incorporates the soluble beryllium and the detritic phase. The 10 Be/ 9 Be ratio associated with the authigenic fraction varies with location. This suggests that the residence time of beryllium in the soluble phase is lower or comparable to the mixing time of the oceans. The evolution with time of the authigenic 10 Be/ 9 Be ratio is discussed [fr 11. Geochemical fractionation of Ni, Cu and Pb in the deep sea sediments from the Central Indian Ocean Basin: An insight into the mechanism of metal enrichment in sediment Digital Repository Service at National Institute of Oceanography (India) Sensarma, S.; Chakraborty, P.; Banerjee, R.; Mukhopadhyay, S. speciation study suggests that Fe–Mn oxyhydroxide phase was the major binding phase for Ni, Cu and Pb in the sediments. The second highest concentrations of all these metals were present within the structure of the sediments. Easily reducible oxide phase... 12. 
Nature and distribution of manganese nodules from three sediment domains of the Central Indian Basin, Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Banerjee, R.; Mukhopadhyay, R. from northern and central region, dominatEd. by terrigenous and terrigenous-siliceous mixed sediments, respectively. Effects of lysocline and sediment diagenesis are envisaged for trace metal enrichment in rough nodules of the southern region. Influence... 13. 2013-2014 U.S. Geological Survey CMGP LiDAR: Post Sandy (New York City) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — TASK NAME: USGS New York CMGP Sandy Lidar 0.7 Meter NPS LIDAR lidar Data Acquisition and Processing Production Task USGS Contract No. G10PC00057 Task Order No.... 14. Macroinfauna and sediment data from swash zones of sandy beaches along the SE Gulf of Mexico and SE Florida coast, 2010-2011 in response to the Deepwater Horizon Oil Spill (NODC Accession 0083190) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — Sampling for macroinfauna from swash zones of beaches along the SE Gulf of Mexico and SE coast of Florida was conducted from May 2010- July 2011. At each site,... 15. Extensive lake sediment coring survey on Sub-Antarctic Indian Ocean Kerguelen Archipelago (French Austral and Antarctic Lands) Science.gov (United States) Arnaud, Fabien; Fanget, Bernard; Malet, Emmanuel; Poulenard, Jérôme; Støren, Eivind; Leloup, Anouk; Bakke, Jostein; Sabatier, Pierre 2016-04-01 Recent paleo-studies revealed climatic southern high latitude climate evolution patterns that are crucial to understand the global climate evolution(1,2). Among others the strength and north-south shifts of westerlies wind appeared to be a key parameter(3). However, virtually no lands are located south of the 45th South parallel between Southern Georgia (60°W) and New Zealand (170°E) precluding the establishment of paleoclimate records of past westerlies dynamics. Located around 50°S and 70°E, lost in the middle of the sub-Antarctic Indian Ocean, Kerguelen archipelago is a major, geomorphologically complex, land-mass that is covered by hundreds lakes of various sizes. It hence offers a unique opportunity to reconstruct past climate and environment dynamics in a region where virtually nothing is known about it, except the remarkable recent reconstructions based on a Lateglacial peatbog sequence(4). During the 2014-2015 austral summer, a French-Norwegian team led the very first extensive lake sediment coring survey on Kerguelen Archipelago under the umbrella of the PALAS program supported by the French Polar Institute (IPEV). Two main areas were investigated: i) the southwest of the mainland, so-called Golfe du Morbihan, where glaciers are currently absent and ii) the northernmost Kerguelen mainland peninsula so-called Loranchet, where cirque glaciers are still present. This double-target strategy aims at reconstructing various independent indirect records of precipitation (glacier advance, flood dynamics) and wind speed (marine spray chemical species, wind-borne terrigenous input) to tackle the Holocene climate variability. Despite particularly harsh climate conditions and difficult logistics matters, we were able to core 6 lake sediment sites: 5 in Golfe du Morbihan and one in Loranchet peninsula. 
Among them, two sequences were taken in the 4 km-long Lake Armor using a UWITEC re-entry piston coring system at 20 and 100 m water depth (6 and 7 m long, respectively). One ... 16. The influence of hypercapnia and the infaunal brittlestar Amphiura filiformis on sediment nutrient flux – will ocean acidification affect nutrient exchange? Directory of Open Access Journals (Sweden) S. Widdicombe 2009-10-01 Rising levels of atmospheric carbon dioxide and the concomitant increased uptake of this by the oceans are resulting in hypercapnia-related reduction of ocean pH. Research focussed on the direct effects of these physicochemical changes on marine invertebrates has begun to improve our understanding of impacts at the level of individual physiologies. However, CO2-related impairment of organisms' contribution to ecological or ecosystem processes has barely been addressed. The burrowing ophiuroid Amphiura filiformis, which has a physiology that makes it susceptible to reduced pH, plays a key role in sediment nutrient cycling by mixing and irrigating the sediment, a process known as bioturbation. Here we investigate the role of A. filiformis in modifying nutrient flux rates across the sediment-water boundary and the impact of CO2-related acidification on this process. A 40 day exposure study was conducted under predicted pH scenarios for the years 2100 (pH 7.7) and 2300 (pH 7.3), plus an additional treatment of pH 6.8. This study demonstrated strong relationships between A. filiformis density and cycling of some nutrients; activity increases the sediment uptake of phosphate and the release of nitrite and nitrate. No relationship between A. filiformis density and the flux of ammonium or silicate was observed. Results also indicated that, within the timescale of this experiment, effects at the individual bioturbator level appear not to translate into reduced ecosystem influence. However, long-term survival of key bioturbating species is far from assured, and changes in both bioturbation and microbial processes could alter key biogeochemical processes in future, more acidic oceans. 17. Analysis of Fluvial Sediment Discharges into Kubanni Reservoir ... African Journals Online (AJOL) The predominant sandy-clay sediment in the reservoir has an estimated total sediment load ... NIGERIAN JOURNAL OF TECHNOLOGY, VOL. 29 NO 2, JUNE ... the upper limit of application is 1-5 g l−1 ... Laursen, Modified Einstein Procedure, ... 18. Buried in time: Culturable fungi in a deep-sea sediment core from the Chagos Trench, Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Raghukumar, C.; Raghukumar, S.; Sheelu, G.; Gupta, S.M.; Nath, B.N.; Rao, B.R. ... of sediments was determined by the 'Karbonat-Bombe' method (Müller and Gastner, 1971). The DBD is one among the several physical properties of sediment, including porosity, and is inversely related to porosity as shown by the equation of Garg (1987 ... 'Karbonat-Bombe', a simple device for the determination of carbonate content in sediments, soils and other material. Neues Jahrbuch Mineralogie, 10, 466-469. Mueller, V., Sengbusch, P. V., 1983. Visualization of aquatic fungi (Chytridiales ... 19. Superstorm Sandy-related Morphologic and Sedimentologic Changes in an Estuarine System: Barnegat Bay-Little Egg Harbor Estuary, New Jersey Science.gov (United States) Miselis, J. L.; Ganju, N. K.; Navoy, A.; Nicholson, R.; Andrews, B.
2013-12-01 Despite the well-recognized ecological importance of back-barrier estuaries, the role of storms in their geomorphic evolution is poorly understood. Moreover, the focus of storm impact assessments is often the ocean shorelines of barrier islands rather than the exchange of sediment from barrier to estuary. In order to better understand and ultimately predict short-term morphologic and sedimentologic changes in coastal systems, a comprehensive research approach is required but is often difficult to achieve given the diversity of data required. An opportunity to use such an approach in assessing the storm-response of a barrier-estuary system occurred when Superstorm Sandy made landfall near Atlantic City, New Jersey on 29 October 2012. Since 2011, the US Geological Survey has been investigating water circulation and water-quality degradation in Barnegat Bay-Little Egg Harbor (BBLEH) Estuary, the southern end of which is approximately 25 kilometers north of the landfall location. This effort includes shallow-water geophysical surveys to map the bathymetry and sediment distribution within BBLEH, airborne topo-bathymetric lidar surveys for mapping the shallow shoals that border the estuary, and sediment sampling, all of which have provided a recent picture of the pre-storm estuarine geomorphology. We combined these pre-storm data with similar post-storm data from the estuary and pre- and post-storm topographic data from the ocean shoreline of the barrier island to begin to understand the response of the barrier-estuary system. Breaches in the barrier island resulted in water exchange between the estuary and the ocean, briefly reducing residence times in the northern part of the estuary until the breaches were closed. Few morphologic changes in water depths greater than 1.5 m were noted. However, morphologic changes observed in shallower depths along the eastern shoreline of the estuary are likely related to overwash processes. In general, surficial estuarine sediments 20. Sediment failures within the Peach Slide (Barra Fan, NE Atlantic Ocean) and relation to the history of the British-Irish Ice Sheet Science.gov (United States) Owen, Matthew J.; Maslin, Mark A.; Day, Simon J.; Long, David 2018-05-01 The Peach Slide is the largest known submarine mass movement on the British continental margin and is situated on the northern flank of the glacigenic Barra Fan. The Barra Fan is located on the northwest British continental margin and is subject to cyclonic ocean circulation, with distinct differences between the circulation during stadial and inter-stadial periods. The fan has experienced growth since continental uplift during the mid-Pliocene, with the majority of sediments deposited during the Pleistocene when the fan was a major depocentre for the British-Irish Ice Sheet (BIIS). Surface and shallow sub-surface morphology of the fan has been mapped using newly digitised archival paper pinger and deep towed boomer sub-bottom profile records, side scan sonar and multibeam echosounder data. This process has allowed the interpretation and mapping of a number of different seismic facies, including: contourites, hemipelagites and debrites. Development of a radiocarbon based age model for the seismic stratigraphy constrains the occurrence of two periods of slope failure: the first at circa 21 ka cal BP, shortly after the BIIS's maximum advance during the deglaciation of the Hebrides Ice Stream; and the second between 12 and 11 ka cal BP at the termination of the Younger Dryas stadial. 
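The radiocarbon-based age model mentioned above rests on the standard conversion from a measured fraction modern carbon (F14C) to a conventional radiocarbon age, which is then calibrated to calendar years (cal BP), with a marine reservoir correction where the dated material is marine carbonate. This is the generic relation rather than anything specific to the Peach Slide chronology:

$$ t_{^{14}C}\ [\mathrm{yr\ BP}] = -8033\,\ln\!\left(F^{14}C\right) $$

where 8033 yr is the Libby mean life of 14C; calibrated estimates such as the circa 21 ka and 12-11 ka cal BP ages quoted above are then obtained by matching conventional ages against a calibration curve.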
Comparison with other mass movement events, which have similar geological and oceanographic settings, suggests that important roles are played by contouritic and glacigenic sedimentation, deposited in inter-stadial and stadial periods respectively when different thermohaline regimes and sediment sources dominate. The effect of this switch in sedimentation is to rapidly deposit thick, low permeability, glacigenic layers above contourite and hemipelagite units. This process potentially produced excess pore pressure in the fan sediments and would have increased the likelihood of sediment failure via reduced shear strength and potential liquefaction. 1. Processes influencing the transport and fate of contaminated sediments in the coastal ocean: Boston Harbor and Massachusetts Bay Science.gov (United States) Alexander, P. Soupy; Baldwin, Sandra M.; Blackwood, Dann S.; Borden, Jonathan; Casso, Michael A.; Crusius, John; Goudreau, Joanne; Kalnejais, Linda H.; Lamothe, Paul J.; Martin, William R.; Martini, Marinna A.; Rendigs, Richard R.; Sayles, Frederick L.; Signell, Richard P.; Valentine, Page C.; Warner, John C.; Bothner, Michael H.; Butman, Bradford 2007-01-01 Most of the major urban centers of the United States including Boston, New York, Washington, Chicago, New Orleans, Miami, Los Angeles, San Francisco, and Seattle—are on a coast (fig. 1.1). All of these cities discharge treated sewage effluent into adjacent waters. In 2000, 74 percent of the U.S. population lived within 200 kilometers (km) of the coast. Between 1980 and 2002, the population density in coastal communities increased approximately 4.5 times faster than in noncoastal areas of the U.S. (Perkins, 2004). More people generate larger volumes of wastes, increase the demands on wastewater treatment, expand the area of impervious land surfaces, and use more vehicles that contribute contaminants to street runoff. According to the National Coastal Condition Report II (U.S. Environmental Protection Agency, 2005a), on the basis of coastal habitat, water and sediment quality, benthic index, and fish tissue, the overall national coastal condition is only poor to fair and the overall coastal condition in the highly populated Northeast is poor. Scientific information helps managers to prioritize and regulate coastal-ocean uses that include recreation, commercial fishing, transportation, waste disposal, and critical habitat for marine organisms. These uses are often in conflict with each other and with environmental concerns. Developing a strategy for managing competing uses while maintaining sustainability of coastal resources requires scientific understanding of how the coastal ocean system behaves and how it responds to anthropogenic influences. This report provides a summary of a multidisciplinary research program designed to improve our understanding of the transport and fate of contaminants in Massachusetts coastal waters. Massachusetts Bay and Boston Harbor have been a focus of U.S. Geological Survey (USGS) research because they provide a diverse geographic setting for developing a scientific understanding of the geology, geochemistry, and oceanography of 2. Preliminary study on detection sediment contamination in soil affected by the Indian Ocean giant tsunami 2004 in Aceh, Indonesia using laser-induced breakdown spectroscopy (LIBS) Energy Technology Data Exchange (ETDEWEB) Idris, Nasrullah, E-mail: [email protected] [Department of Physics, Faculty of Mathematics and Natural Sciences, Syiah Kuala University, Jl. Syech Abdurrauf No. 
3 Darussalam, 23111 Banda Aceh, Aceh (Indonesia); Ramli, Muliadi [Department of Chemistry, Faculty of Mathematics and Natural Sciences, Syiah Kuala University, Jl. Syech Abdurrauf No. 3 Darussalam, 23111 Banda Aceh, Aceh (Indonesia); Hedwig, Rinda; Lie, Zener Sukra [Department of Computer Engineering, Bina Nusantara University, 9 K. H. Syahdan, Jakarta 14810 (Indonesia); Kurniawan, Koo Hendrik [Research Center of Maju Makmur Mandiri Foundation, 40 Srengseng Raya, Kembangan, Jakarta Barat 11630, Jakarta (Indonesia) 2016-03-11 This work is intended to asses the capability of LIBS for the detection of the tsunami sediment contamination in soil. LIBS apparatus used in this work consist of a laser system and an optical multichannel analyzer (OMA) system. The soil sample was collected from in Banda Aceh City, Aceh, Indonesia, the most affected region by the giant Indian Ocean tsunami 2004. The laser beam was focused onto surface of the soil pellet using a focusing lens to produce luminous plasma. The experiment was conducted under air as surrounding gas at 1 atmosphere. The emission spectral lines from the plasma were detected by the OMA system. It was found that metal including heavy metals can surely be detected, thus implying the potent of LIBS technique as a fast screening tools of tsunami sediment contamination. 3. Numerical modeling of salt marsh morphological change induced by Hurricane Sandy Science.gov (United States) Hu, Kelin; Chen, Qin; Wang, Hongqing; Hartig, Ellen K.; Orton, Philip M. 2018-01-01 The salt marshes of Jamaica Bay serve as a recreational outlet for New York City residents, mitigate wave impacts during coastal storms, and provide habitat for critical wildlife species. Hurricanes have been recognized as one of the critical drivers of coastal wetland morphology due to their effects on hydrodynamics and sediment transport, deposition, and erosion processes. In this study, the Delft3D modeling suite was utilized to examine the effects of Hurricane Sandy (2012) on salt marsh morphology in Jamaica Bay. Observed marsh elevation change and accretion from rod Surface Elevation Tables and feldspar Marker Horizons (SET-MH) and hydrodynamic measurements during Hurricane Sandy were used to calibrate and validate the wind-waves-surge-sediment transport-morphology coupled model. The model results agreed well with in situ field measurements. The validated model was then used to detect salt marsh morphological change due to Sandy across Jamaica Bay. Model results indicate that the island-wide morphological changes in the bay's salt marshes due to Sandy were in the range of −30 mm (erosion) to +15 mm (deposition), and spatially complex and heterogeneous. The storm generated paired deposition and erosion patches at local scales. Salt marshes inside the west section of the bay showed erosion overall while marshes inside the east section showed deposition from Sandy. The net sediment amount that Sandy brought into the bay is only about 1% of the total amount of reworked sediment within the bay during the storm. Numerical experiments show that waves and vegetation played a critical role in sediment transport and associated wetland morphological change in Jamaica Bay. Furthermore, without the protection of vegetation, the marsh islands of Jamaica Bay would experience both more erosion and less accretion in coastal storms. 4. Impact of open-ocean convection on particle fluxes and sediment dynamics in the deep margin of the Gulf of Lions Directory of Open Access Journals (Sweden) M. 
Stabholz 2013-02-01 Full Text Available The deep outer margin of the Gulf of Lions and the adjacent basin, in the western Mediterranean Sea, are regularly impacted by open-ocean convection, a major hydrodynamic event responsible for the ventilation of the deep water in the western Mediterranean Basin. However, the impact of open-ocean convection on the flux and transport of particulate matter remains poorly understood. The variability of water mass properties (i.e., temperature and salinity, currents, and particle fluxes were monitored between September 2007 and April 2009 at five instrumented mooring lines deployed between 2050 and 2350-m depth in the deepest continental margin and adjacent basin. Four of the lines followed a NW–SE transect, while the fifth one was located on a sediment wave field to the west. The results of the main, central line SC2350 ("LION" located at 42°02.5′ N, 4°41′ E, at 2350-m depth, show that open-ocean convection reached mid-water depth (≈ 1000-m depth during winter 2007–2008, and reached the seabed (≈ 2350-m depth during winter 2008–2009. Horizontal currents were unusually strong with speeds up to 39 cm s−1 during winter 2008–2009. The measurements at all 5 different locations indicate that mid-depth and near-bottom currents and particle fluxes gave relatively consistent values of similar magnitude across the study area except during winter 2008–2009, when near-bottom fluxes abruptly increased by one to two orders of magnitude. Particulate organic carbon contents, which generally vary between 3 and 5%, were abnormally low (≤ 1% during winter 2008–2009 and approached those observed in surface sediments (≈ 0.6%. Turbidity profiles made in the region demonstrated the existence of a bottom nepheloid layer, several hundred meters thick, and related to the resuspension of bottom sediments. These observations support the view that open-ocean deep convection events in the Gulf of Lions can cause significant remobilization 5. Disposal in sea-bed geological formations. Properties of ocean sediments in relation to the disposal of radioactive waste International Nuclear Information System (INIS) Schultheiss, P.J.; Thomson, J. 1984-01-01 Work on the permeability and consolidation characteristics of sediment cores from the north-east Atlantic has shown that each sediment type studied has a unique void ratio/permeability relationship and that the permeability decreases with effective stress more rapidly for fine than for coarser grained material. Significant over-consolidation is also present in Pacific red clays from the deep-sea drilling project. Their permeability is less for a given void ratio than that of their Atlantic counterparts. A theoretical analysis is given of the effects on permeability of deep open burrows revealed by improved core handling techniques. Mineralogy and sediment and water chemistry of six cores from the Nares Abyssal Plain have demonstrated the effects of lateral sediment redistribution and have shown only mildly reducing conditions. Pore water studies on a 4 m Kasten core from Great Meteor East show oxygen falling to zero within 30 cm of the sediment surface 6. 
Late-Middle Quaternary lithostratigraphy and sedimentation patterns on the Alpha Ridge, central Arctic Ocean: Implications for Arctic climate variability on orbital time scales Science.gov (United States) Wang, Rujian; Polyak, Leonid; Xiao, Wenshen; Wu, Li; Zhang, Taoliang; Sun, Yechen; Xu, Xiaomei 2018-02-01 We use sediment cores collected by the Chinese National Arctic Research Expeditions from the Alpha Ridge to advance Quaternary stratigraphy and paleoceanographic reconstructions for the Arctic Ocean. Our cores show a good litho/biostratigraphic correlation to sedimentary records developed earlier for the central Arctic Ocean, suggesting a recovered stratigraphic range of ca. 0.6 Ma, suitable for paleoclimatic studies on orbital time scales. This stratigraphy was tested by correlating the stacked Alpha Ridge record of bulk XRF manganese, calcium and zirconium (Mn, Ca, Zr), to global stable-isotope (LR04-δ18O) and sea-level stacks and tuning to orbital parameters. Correlation results corroborate the applicability of presumed climate/sea-level controlled Mn variations in the Arctic Ocean for orbital tuning. This approach enables better understanding of the global and orbital controls on the Arctic climate. Orbital tuning experiments for our records indicate strong eccentricity (100-kyr) and precession (∼20-kyr) controls on the Arctic Ocean, probably implemented via glaciations and sea ice. Provenance proxies like Ca and Zr are shown to be unsuitable as orbital tuning tools, but useful as indicators of glacial/deglacial processes and circulation patterns in the Arctic Ocean. Their variations suggest an overall long-term persistence of the Beaufort Gyre circulation in the Alpha Ridge region. Some glacial intervals, e.g., MIS 6 and 4/3, are predominated by material presumably transported by the Transpolar Drift. These circulation shifts likely indicate major changes in the Arctic climatic regime, which yet need to be investigated. Overall, our results demonstrate applicability of XRF data to paleoclimatic studies of the Arctic Ocean. 7. Geochemistry of sediments Digital Repository Service at National Institute of Oceanography (India) Nath, B.N. Considering the potential of elemental data in marine sediments as diagnostic tools of various geological and oceanographic processes, sediment geochemical data from the Indian Ocean region has been reviewed in this article. Emphasis is laid... 8. Implementation of the vortex force formalism in the coupled ocean-atmosphere-wave-sediment transport (COAWST) modeling system for inner shelf and surf zone applications Science.gov (United States) Kumar, Nirnimesh; Voulgaris, George; Warner, John C.; Olabarrieta, Maitane 2012-01-01 The coupled ocean-atmosphere-wave-sediment transport modeling system (COAWST) enables simulations that integrate oceanic, atmospheric, wave and morphological processes in the coastal ocean. Within the modeling system, the three-dimensional ocean circulation module (ROMS) is coupled with the wave generation and propagation model (SWAN) to allow full integration of the effect of waves on circulation and vice versa. The existing wave-current coupling component utilizes a depth dependent radiation stress approach. In here we present a new approach that uses the vortex force formalism. The formulation adopted and the various parameterizations used in the model as well as their numerical implementation are presented in detail. The performance of the new system is examined through the presentation of four test cases. 
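For orientation, the vortex-force representation adopted in entry 8 can be written schematically as a wave-averaged momentum balance of the Craik-Leibovich type; the form below is a generic textbook sketch, and the exact terms, closures and discretization implemented in COAWST are those documented by the authors:

$$ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\hat{\mathbf{z}}\times(\mathbf{u}+\mathbf{u}^{St}) = -\nabla\!\Big(\frac{p}{\rho_0}+K\Big) + \mathbf{u}^{St}\times(\nabla\times\mathbf{u}) + \mathbf{F}^{wave} + \mathbf{D} $$

Here u is the Eulerian mean velocity, u^St the Stokes drift, K the wave-induced Bernoulli head, u^St × (∇ × u) the vortex force, F^wave the non-conservative wave forcing (e.g. breaking and dissipation) and D turbulent mixing. The four test cases used to evaluate the implementation are listed next.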
These include obliquely incident waves on a synthetic planar beach and a natural barred beach (DUCK' 94); normal incident waves on a nearshore barred morphology with rip channels; and wave-induced mean flows outside the surf zone at the Martha's Vineyard Coastal Observatory (MVCO). 9. Habitat filtering of bacterioplankton communities above polymetallic nodule fields and sediments in the Clarion-Clipperton zone of the Pacific Ocean. Science.gov (United States) Lindh, Markus V; Maillot, Brianne M; Smith, Craig R; Church, Matthew J 2018-04-01 Deep-sea mining of commercially valuable polymetallic nodule fields will generate a seabed sediment plume into the water column. Yet, the response of bacterioplankton communities, critical in regulating energy and matter fluxes in marine ecosystems, to such disturbances is unknown. Metacommunity theory, traditionally used in general ecology for macroorganisms, offers mechanistic understanding on the relative role of spatial differences compared with local environmental conditions (habitat filtering) for community assembly. We examined bacterioplankton metacommunities using 16S rRNA amplicons from the Clarion-Clipperton Zone (CCZ) in the eastern Pacific Ocean and in global ocean transect samples to determine sensitivity of these assemblages to environmental perturbations. Habitat filtering was the main assembly mechanism of bacterioplankton community composition in the epi- and mesopelagic waters of the CCZ and the Tara Oceans transect. Bathy- and abyssopelagic bacterioplankton assemblages were mainly assembled by undetermined metacommunity types or neutral and dispersal-driven patch-dynamics for the CCZ and the Malaspina transect. Environmental disturbances may alter the structure of upper-ocean microbial assemblages, with potentially even more substantial, yet unknown, impact on deep-sea communities. Predicting such responses in bacterioplankton assemblage dynamics can improve our understanding of microbially-mediated regulation of ecosystem services in the abyssal seabed likely to be exploited by future deep-sea mining operations. © 2018 Society for Applied Microbiology and John Wiley & Sons Ltd. 10. Science and Sandy: Lessons Learned Science.gov (United States) Werner, K. 2013-12-01 Following Hurricane Sandy's impact on the mid-Atlantic region, President Obama established a Task Force to '...ensure that the Federal Government continues to provide appropriate resources to support affected State, local, and tribal communities to improve the region's resilience, health, and prosperity by building for the future.' The author was detailed from NOAA to the Task Force between January and June 2013. As the Task Force and others began to take stock of the region's needs and develop plans to address them, many diverse approaches emerged from different areas of expertise including: infrastructure, management and construction, housing, public health, and others. Decision making in this environment was complex with many interests and variables to consider and balance. Although often relevant, science and technical expertise was not always at the forefront of this process. This talk describes the author's experience with the Sandy Task Force focusing on organizing scientific expertise to support the work of the Task Force. This includes a description of federal activity supporting Sandy recovery efforts, the role of the Task Force, and lessons learned from developing a science support function within the Task Force. 11. 
Occurrence of Neogloboquadrina pachyderma new subspecies in the shelf-slope sediments of northern Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Setty, M.G.A.P. ... °N in the Bay of Bengal. Studies of hydrological conditions in the Indian Ocean reveal that the Subtropical Subsurface Water Mass is traceable as far north as the Gulf of Aden, and the Indian Ocean Deep Bottom Water Mass originating in the deepest ... 12. Anthropogenic radionuclides in sediments in the NW Pacific Ocean and its marginal seas. Results of the 1994-1995 Japanese-Korean-Russian expeditions International Nuclear Information System (INIS) Pettersson, H.B.L.; Amano, H.; Berezhnov, V.I.; Nikitin, A.; Veletova, N.K.; Chaykovskaya, E.; Chumichev, V.B.; Chung, C.S.; Gastaud, J.; Hirose, K.; Hong, G.H.; Kim, C.K.; Kim, S.H.; Lee, S.H.; Morimoto, T.; Oda, K.; Povinec, P.P.; Togawa, O.; Suzuki, E.; Tkalin, A.; Volkov, Y.; Yoshida, K. 1999-01-01 Assessment of contamination by anthropogenic radionuclides from past dumping of radioactive waste in areas of the Okhotsk Sea, NW Pacific Ocean and the Sea of Japan/East Sea has been performed. Two joint Japanese-Korean-Russian scientific expeditions were carried out in 1994-1995, during which seawater and seabed sediments were sampled from 22 sites. Results of the sediment analysis are reported here: concentrations of 90Sr, 137Cs, 238Pu, 239,240Pu and 241Am in surface-layer and bulk sediments showed large spatial variations, ranging between ... Bq kg−1 dry wt., ... Bq kg−1 dry wt., ... Bq kg−1 dry wt., 0.006 and 2.0 Bq kg−1 dry wt., and 0.03 and 1.8 Bq kg−1 dry wt., respectively. However, the concentrations are comparable with those found at reference sites outside the dumping areas and they generally fall within ranges previously reported for non-dumping areas of the investigated seas. Estimates of sediment inventories indicated differences in radionuclide load between shelf/slope and basin-type sediments as well as a dependence on water depth. Except for the shallow areas, most of the inventories of 90Sr, 137Cs and Pu isotopes are still to be found in the water column. Total inventories (in water + sediment) show a surplus of 137Cs and Pu isotopes compared to the expected integrated global fall-out deposition, which is consistent with previous observations in non-dumping areas of the seas investigated. Analysis of sediment 238Pu/239,240Pu activity ratios showed values in accord with that of global fall-out. Analysis of radionuclide depth distributions in core samples from areas of the Sea of Okhotsk showed sedimentation rates of 0.2-0.4 g cm−2 yr−1 and 0.03 g cm−2 yr−1 for shelf and basin areas, respectively, which is similar to values found in the Sea of Japan/East Sea. Depth profiles of 90Sr, 137Cs and Pu isotopes in cores from the basin area indicate a typical delay compared to the input records of global fall-out. 13. The composition and the source of hydrocarbons in sediments taken from the tectonically active Andaman Backarc Basin, Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Chernova, T.G.; Rao, P.S.; Pikovskii, Yu.I.; Alekseeva, T.A.; Nath, B.N.; Rao, B.R.; Rao, Ch.M. ... or hydrothermal organic matter. Anthropogenic sources in the region studied are of minor importance. From the results obtained, it may be deduced that the hydrocarbons in the sediments of the tectonically active part of the Andaman Basin are mainly due ...
14. Hard substrate in the deep ocean: How sediment features influence epibenthic megafauna on the eastern Canadian margin Science.gov (United States) Lacharité, Myriam; Metaxas, Anna 2017-08-01 Benthic habitats on deep continental margins (> 1000 m) are now considered heterogeneous - in particular because of the occasional presence of hard substrate in a matrix of sand and mud - influencing the distribution of megafauna, which can thrive on both sedimented and rocky substrates. At these depths, optical imagery captured with high-definition cameras to describe megafauna can also describe effectively the fine-scale sediment properties in the immediate vicinity of the fauna. In this study, we determined the relationship between local heterogeneity (10-100 sm) in fine-scale sediment properties and the abundance, composition, and diversity of megafauna along a large depth gradient (1000-3000 m) in a previously unexplored habitat: the Northeast Fan, which lies downslope of submarine canyons off the Gulf of Maine (northwest Atlantic). Substrate heterogeneity was quantified using a novel approach based on principles of computer vision. This approach proved powerful in detecting gradients in sediment and sporadic complex features (i.e. large boulders) in an otherwise homogeneous environment because it characterizes sediment properties on a continuous scale. Sediment heterogeneity influenced megafaunal diversity (morphospecies richness and Shannon-Wiener Index) and community composition, with areas of higher substrate complexity generally supporting higher diversity. However, patterns in abundance were not influenced by sediment properties, and may be best explained by gradients in food supply. Our study provides a new approach to quantify fine-scale sediment properties and assess their role in shaping megafaunal communities in the deep sea, which should be included in habitat studies given their potential ecological importance.
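The computer-vision approach referenced in entry 14 is not described here in enough detail to reproduce; purely as an illustration of what characterizing substrate texture "on a continuous scale" from imagery can look like, the sketch below scores a grayscale seafloor photo by its local variance. The function names, window size and the random stand-in image are assumptions for the example, not the authors' method:

```python
# Illustrative only: a simple continuous texture (local variance) measure for a seafloor photo.
# This is NOT the method of Lacharité & Metaxas (2017); it just shows the general idea of
# scoring substrate heterogeneity on a continuous scale from imagery.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(gray_image: np.ndarray, window: int = 31) -> np.ndarray:
    """Return a per-pixel local variance map; higher values = more textured substrate."""
    img = gray_image.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    return np.maximum(mean_sq - mean * mean, 0.0)

def heterogeneity_score(gray_image: np.ndarray, window: int = 31) -> float:
    """Collapse the variance map to a single image-level heterogeneity score."""
    return float(np.mean(local_variance(gray_image, window)))

# Hypothetical usage with a random stand-in image (replace with a real photo loaded as grayscale)
rng = np.random.default_rng(0)
fake_photo = rng.integers(0, 256, size=(480, 640))
print(heterogeneity_score(fake_photo))
```

In practice such a score would be computed per image or per image tile and then related to faunal counts from the same frames.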
15. Analysis of wave dynamics over an intertidal mudflat of a sandy-gravelly estuarine beach - Field survey and preliminary modeling approach Science.gov (United States) Morio, Olivier; Sedrati, Mouncef; Goubert, Evelyne 2014-05-01 Like marine submersion and erosion, infilling by clayey-silty sediment on estuarine and bay beaches is a major issue in these heavily frequented coastal areas. Coupled sandy/gravelly and clayey/silty intertidal areas can be observed in such settings, depending on river characteristics (particle discharge, water flow), ocean dynamics (wave exposure, currents) and sediment sources. Around the world, sandy/gravelly beaches are exposed to episodic or continuous inputs of clay sediments. The Vilaine estuary, the Bay of Arcachon and the Bay of Seine in France, Plymouth Bay in the UK and the Wadden Sea in Germany are a few examples of coupled or mixed muddy/sandy systems. The beach of Bétahon (Ambon, Brittany, France), located on the outer Vilaine estuary, is an example of this issue. This meso-macrotidal intermediate (low tide terrace) beach presents heterogeneous sediments. The upper intertidal zone is composed of sand and gravel and characterized by a steep slope. The lower part of the beach has a very gentle slope and consists of silt and clay. The clay/sand boundary is marked by a decimetre-scale erosion cliff in the mudflat along the beach. In order to understand bed variations and sediment transport on this complex heterogeneous beach, a good understanding of wave dynamics across the beach is necessary. This study focuses on wave dynamics over the beach, using field observations and the MIKE 21 3D wave numerical model. This paper is a preliminary step toward a broader understanding of the behaviour of this estuarine beach. Swell is modeled from the deep sea to the nearshore over a 100 km² area, with observed wind, offshore wave characteristics, river discharge and tidal level defined as open boundary conditions for the regional model. The regional model is based on multiple bathymetric surveys over the last 50 years. The local model, a triangular mesh gridded to 5 meters covering Bétahon beach, is based on topographic and photographic surveys of the mudflat ... 16. Regional variations in provenance and abundance of ice-rafted clasts in Arctic Ocean sediments: Implications for the configuration of late Quaternary oceanic and atmospheric circulation in the Arctic Science.gov (United States) Phillips, R.L.; Grantz, A. 2001-01-01 The composition and distribution of ice-rafted glacial erratics in late Quaternary sediments define the major current systems of the Arctic Ocean and identify two distinct continental sources for the erratics. In the southern Amerasia basin up to 70% of the erratics are dolostones and limestones (the Amerasia suite) that originated in the carbonate-rich Paleozoic terranes of the Canadian Arctic Islands. These clasts reached the Arctic Ocean in glaciers and were ice-rafted to the core sites in the clockwise Beaufort Gyre. The concentration of erratics decreases northward by 98% along the trend of the gyre from the southeastern Canada basin to the Makarov basin. The concentration of erratics then triples across the Makarov basin flank of Lomonosov Ridge, and siltstone, sandstone and siliceous clasts become dominant in cores from the ridge and the Eurasia basin (the Eurasia suite). The bedrock source for the siltstone and sandstone clasts is uncertain, but bedrock distribution and the distribution of glaciation in northern Eurasia suggest the Taymyr Peninsula-Kara Sea regions. The pattern of clast distribution in the Arctic Ocean sediments and the sharp northward decrease in concentration of clasts of Canadian Arctic Island provenance in the Amerasia basin support the conclusion that the modern circulation pattern of the Arctic Ocean, with the Beaufort Gyre dominant in the Amerasia basin and the Transpolar Drift dominant in the Eurasia basin, has controlled both sea-ice and glacial iceberg drift in the Arctic Ocean during interglacial intervals since at least the late Pleistocene. The abruptness of the change in both clast composition and concentration on the Makarov basin flank of Lomonosov Ridge also suggests that the boundary between the Beaufort Gyre and the Transpolar Drift has been relatively stable during interglacials since that time. Because the Beaufort Gyre is wind-driven, our data, in conjunction with the westerly directed orientation of sand dunes that formed during ... 17. Upper ocean carbon flux determined by the 234Th approach and sediment traps using size-fractionated POC and 234Th data from the Gulf of Mexico International Nuclear Information System (INIS) Hung, Chin-Chang; Roberts, Kimberly A.; Santschi, Peter H.; Guo, Laodong 2004-01-01 Size-fractionated particulate 234Th and particulate organic carbon (POC) fluxes were measured in the Gulf of Mexico during 2000 and 2001 in order to obtain a better estimate of upper-ocean organic carbon export out of the euphotic zone within cold-core and warm-core rings, and to assess the relative merit of the sediment trap and POC/234Th methods.
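The 234Th approach named above is conventionally applied in two steps: the export flux of 234Th is first derived from its deficit relative to 238U in the upper water column, and that flux is then scaled by the POC/234Th ratio measured on sinking particles. In the simplest steady-state form, neglecting physical transport,

$$ F_{^{234}Th} = \lambda_{234}\int_0^{z_{ex}}\big(A_{^{238}U}-A_{^{234}Th}\big)\,dz, \qquad F_{POC} = F_{^{234}Th}\times\Big(\frac{POC}{^{234}Th}\Big)_{particles} $$

with λ234 ≈ 0.0288 d−1 (half-life 24.1 d) and z_ex the export depth (120 m in this study). Which POC/234Th ratio is inserted - measured at the export depth or averaged over the upper water column, on small or on large particles - is exactly the sensitivity explored in the results that follow.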
In 2000, the flux of POC measured by sediment traps at 120 m ranged from 60 to 148 mg C m -2 d -1 , while 234 Th-derived POC fluxes in large particles (>53 μm) varied from 18 to 61 mg C m -2 d -1 using the ratio of POC/ 234 Th at 120 m, and from 51 to 163 mg C m -2 d -1 using an average ratio of POC/ 234 Th for the upper 120 m water column. In 2001, the fluxes of POC measured by traps deployed at 120 m water depth ranged from 39 to 48 mg C m -2 d -1 , while the 234 Th-derived POC fluxes in large particles (>53 μm) varied from 7 to 37 mg C m -2 d -1 using a ratio of POC/ 234 Th at 120 m, and from 37 to 45 mg C m -2 d -1 using an average ratio of POC/ 234 Th within the 0-120 m interval. The results show that POC fluxes estimated by the 234 Th method using the average ratio of POC/ 234 Th within the euphotic zone are similar to those measured by sediment traps. Furthermore, the results demonstrate that the variability in POC export fluxes estimated by the 234 Th/ 238 U disequilibrium approach is strongly related to the ratio of POC/ 234 Th that is taken, and for which we have independent evidence that it may be controlled by the chemical composition of the suspended particles. The results also reveal that using POC/ 234 Th ratios in small particles may result in an estimate of the POC export flux that is considerably higher than when using POC/ 234 Th ratios in large particles (>53 μm). The POC flux calculated from ratios in large particles is, however, more comparable to the POC flux determined directly by sediment traps, but both of these estimates are much lower than that determined by using the POC/ 234 Th ratios in 18. Magnetic Hysteresis of Deep-Sea Sediments in Korea Deep Ocean Study(KODOS) Area, NE Pacific Science.gov (United States) Kim, K.; Park, C.; Yoo, C. 2001-12-01 The KODOS area within the Clarion-Clipperton fracture zone (C-C zone) is surrounded by the Hawaiian and Line Island Ridges to the west and the central American continent to the east. Topography of the seafloor consists of flat-topped abyssal hills and adjacent abyssal troughs, both of which run parallel in N-S direction. Sediments from the study area consist mainly of biogenic sediments. Latitudinal zonation of sedimentary facies was caused by the accumulation of biogenic materials associated with the equatorial current system and movement of the Pacific plate toward the north or northwest. The KODOS area belongs to the latitudinal transition zone having depositional characteristics between non-fossiliferous pelagic clay-dominated zone and calcareous sediment-dominated zone. The box core sediments of the KODOS area are analyzed in an attempt to obtain magnetic hysteresis information and to elucidate the relationship between hysteresis property and lithological facies. Variations in magnetic hysteresis parameters with unit layers reflect the magnetic grain-size and concentrations within the sediments. The ratios of remanant coercivity/coercive force (Hcr/Hc) and saturation remnance/saturation magnetization (Mrs/Ms) indicate that coarse magnetic grains are mainly distributed in dark brown sediments (lower part of the sediment core samples) reflecting high Hcr/Hc and low Mrs/Ms ratios. These results are mainly caused by dissolution differences with core depth. From the plotting of the ratios of hyteresis parameters, it is indicated that magnetic minerals in cubic samples are in pseudo-single domain (PSD) state. 19. The Early Shorebird Will Catch Fewer Invertebrates on Trampled Sandy Beaches. 
Science.gov (United States) Schlacher, Thomas A; Carracher, Lucy K; Porch, Nicholas; Connolly, Rod M; Olds, Andrew D; Gilby, Ben L; Ekanayake, Kasun B; Maslo, Brooke; Weston, Michael A 2016-01-01 Many species of birds breeding on ocean beaches and in coastal dunes are of global conservation concern. Most of these species rely on invertebrates (e.g. insects, small crustaceans) as an irreplaceable food source, foraging primarily around the strandline on the upper beach near the dunes. Sandy beaches are also prime sites for human recreation, which impacts these food resources via negative trampling effects. We quantified acute trampling impacts on assemblages of upper shore invertebrates in a controlled experiment over a range of foot traffic intensities (up to 56 steps per square metre) on a temperate beach in Victoria, Australia. Trampling significantly altered assemblage structure (species composition and density) and was correlated with significant declines in invertebrate abundance and species richness. Trampling effects were strongest for rare species. In heavily trafficked plots the abundance of sand hoppers (Amphipoda), a principal prey item of threatened Hooded Plovers breeding on this beach, was halved. In contrast to the consistently strong effects of trampling, natural habitat attributes (e.g. sediment grain size, compactness) were much less influential predictors. If acute suppression of invertebrates caused by trampling, as demonstrated here, is more widespread on beaches it may constitute a significant threat to endangered vertebrates reliant on these invertebrates. This calls for a re-thinking of conservation actions by considering active management of food resources, possibly through enhancement of wrack or direct augmentation of prey items to breeding territories. 20. Biogeographical distribution and diversity of microbes in methane hydrate-bearing deep marine sediments, on the Pacific Ocean Margin DEFF Research Database (Denmark) Inagaki, F.; Nunoura, T.; Nakagawa, S. 2006-01-01 The deep subseafloor biosphere is among the least-understood habitats on Earth, even though the huge microbial biomass therein plays an important role for potential long-term controls on global biogeochemical cycles. We report here the vertical and geographical distribution of microbes and their ... Members of the uncultivated Deep-Sea Archaeal Group were consistently the dominant phylotype in sediments associated with methane hydrate. Sediment cores lacking methane hydrates displayed few or no Deep-Sea Archaeal Group phylotypes. Bacterial communities in the methane hydrate-bearing sediments were dominated by members... 1. 210Po/210Pb Activity Ratios as a Possible 'Dating Tool' of Ice Cores and Ice-rafted Sediments from the Western Arctic Ocean - Preliminary Results Science.gov (United States) Krupp, K.; Baskaran, M. M. 2016-02-01 We have collected and analyzed a suite of surface snow samples, ice cores, ice-rafted sediments (IRS) and aerosol samples from the Western Arctic for Po-210 and Pb-210 to examine the extent of disequilibrium between this pair to possibly use 210Po/210Pb activity ratio to date different layers of ice cores and time of incorporation of ice-rafted sediments into the sea ice.
We have earlier reported that the activity concentrations of 210Pb in IRS vary over an order of magnitude and are 1-2 orders of magnitude higher than those of the benthic sediments (1-2 dpm/g in benthic sediments compared to 25 to 300 dpm/g in IRS). In this study, we have measured 210Po/210Pb activity ratios in aerosols from the Arctic Ocean to constrain the initial 210Po/210Pb ratio at the time of deposition during precipitation. The 210Po activity concentration in recent snow is compared to surface ice samples. The 'age' of IRS incorporation can be calculated as follows: [210Po]measured = [210Po]initial + [210Pb] (1 - exp(-λt)) (1) where λ is the decay constant of 210Po (half-life 138.4 days) and t is the in-growth time period. From this equation, t can be calculated as follows: t = (-1/λ) ln[1 - ((210Po/210Pb)measured - (210Po/210Pb)initial)] (2) The assumptions involved in this approach are: (i) there is no preferential uptake of 210Po (highly biogenic, S group); and (iii) both 210Po and 210Pb remain a closed system. The calculated ages using equation (2) will be discussed and presented. 2. 2012 USACE Post Sandy Topographic LiDAR: Virginia and Maryland Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — TASK ORDER NAME: VIRGINIA AND MARYLAND LIDAR ACQUISITION FOR SANDY RESPONSE CONTRACT NUMBER: W912P9-10-D-0533 TASK ORDER NUMBER: W81C8X2314841 Woolpert Project... 3. 2012 USACE Post Sandy Topographic LiDAR: Eastern Long Island, New York Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — TASK ORDER NAME: EASTERN LONG ISLAND, NEW YORK LIDAR ACQUISITION FOR SANDY RESPONSE CONTRACT NUMBER: W912P9-10-D-0533 TASK ORDER NUMBER: W81C8X23208588 Woolpert... 4. Sandy lower Gotherivian reservoirs in the south central Turkmeniya. [Siberia] Energy Technology Data Exchange (ETDEWEB) Mavyyev, N.Ch.; Nedirov, B.R. 1982-01-01 The composition and reservoir (porosity-permeability) properties of sandy rocks of early Gotherivian age developed on the Karadzhaulak and Cirili fields within the northeast slope of the Predkopetdag marginal trough and in the Dengli Bakharadok areas of the Bakharadok monocline are studied. These rocks are viewed as analogs of the gas-bearing Shatlyk level of the Murgabskiy Basin. They can be considered the main potential source of hydrocarbons on the studied territory. In the upper part of the lower Gotherivian, a level of sandy rocks is traced. The rocks are represented by fine- and medium-grained red and light grey varieties of sandstones of polymictic composition. The porosity of the sandstones is 20-22%, and the permeability is 200-500 mdarcy. Not only a similar stratigraphic position of the described sandstones in the lower Gotherivian was found, but also a common lithological nature of the rocks. In south central Turkmeniya one can identify age analogs of the Shatlyk level, the main productive level of southeast Turkmeniya. The thickness of the sandy beds is from 17 to 45 m. The sandstones of the Karadzhaulak area have the best reservoir properties. Post-sedimentation changes depend on the quantity and composition of the cement, the influence of formation waters, and possibly the thermobaric conditions of rock formation. The presence of sandy rocks with high reservoir properties in the cross section of the lower Gotherivian deposits in south central Turkmeniya should be considered in determining targets for further prospecting and exploration. The areas of Kumbet and Karadzhaulak are primary targets. 5.
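A minimal numerical sketch of the 210Po/210Pb in-growth age of equations (1)-(2) in the dating-tool entry above; the measured and initial activity ratios used here are hypothetical placeholders, not data from that study.

    import math

    HALF_LIFE_PO210_DAYS = 138.4
    LAMBDA_PO210 = math.log(2) / HALF_LIFE_PO210_DAYS   # decay constant, 1/day

    def ingrowth_age_days(ar_measured, ar_initial):
        """Age t (days) from t = (-1/lambda) * ln(1 - (AR_measured - AR_initial)),
        where AR is the 210Po/210Pb activity ratio. Valid while the ratio
        difference is < 1 (i.e. before secular equilibrium is reached)."""
        dar = ar_measured - ar_initial
        if not 0.0 <= dar < 1.0:
            raise ValueError("ratio difference must be in [0, 1) for a finite age")
        return -math.log(1.0 - dar) / LAMBDA_PO210

    if __name__ == "__main__":
        # hypothetical ratios: aerosol-derived initial value and an ice-core layer
        print("age = %.0f days" % ingrowth_age_days(ar_measured=0.55, ar_initial=0.10))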
Major, trace, and rare earth elements in the sediments of the Central Indian Ocean Basin: Their source and distribution Digital Repository Service at National Institute of Oceanography (India) Pattan, J.N.; Jauhari, P. The distribution maps of elements show that highest concentrations of Mn, Cu, Ni, Zn, Co, and biogenic opal in the surface sediment occurs between 10 degrees S and 16 degrees S latitude, where diagenetic ferromanganese nodules rich in Mn, Cu, Ni... 6. Integrated ocean drilling program expedition 341 Preliminary report: Southern Alaska margin- Interactions of tectonics, climate, and sedimentation Digital Repository Service at National Institute of Oceanography (India) Jaeger, J.M.; Asahi, H.; Gulick, S.S.; Bahlburg, H.; LeVay, L.J.; Belanger, C.L.; Slagle, A.L.; Berbel, G.B.B.; Drab, L.; Childress, L.B.; Cowan, E.A.; Konno, S.; Forwick, M.; Marz, C.E.; Fukumura, A.; Matsuzaki, K.M.; Ge, S.; McClymont, E.L.; Gupta, S.M.; et al. global increase in erosion rates and sediment delivery to basins. The effects of this increased erosion may be profound, as worldwide analyses of orogenic belts have shown that Earth systems cannot be considered to be the product of a series of distinct... 7. Rock magnetic and geochemical analyses of surface sediment characteristics in deep ocean environments: A case study across the Ryukyu Trench Science.gov (United States) Kawamura, N.; Kawamura, K.; Ishikawa, N. 2008-03-01 Magnetic minerals in marine sediments are often dissolved or formed with burial depth, thereby masking the primary natural remanent magnetization and paleoclimate signals. In order to clarify the present sedimentary environment and the progressive changes with burial depth in the magnetic properties, we studied seven cores collected from the Ryukyu Trench, southwest Japan. Magnetic properties, organic geochemistry, and interstitial water chemistry of seven cores are described. Bottom water conditions at the landward slope, trench floor, and seaward slope are relatively suboxic, anoxic, and oxic, respectively. The grain size of the sediments become gradually finer with the distance from Okinawa Island and finer with increasing water depth. The magnetic carriers in the sediments are predominantly magnetite and maghemized magnetite, with minor amounts of hematite. In the topmost sediments from the landward slope, magnetic minerals are diluted by terrigenous materials and microfossils. The downcore variations in magnetic properties and geochemical data provided evidence for the dissolution of fine-grained magnetite with burial depth under an anoxic condition. 8. An instrument to measure differential pore pressures in deep ocean sediments: Pop-Up-Pore-Pressure-Instrument (PUPPI) International Nuclear Information System (INIS) Schultheiss, P.J.; McPhail, S.D.; Packwood, A.R.; Hart, B. 1985-01-01 A Pop-Up-Pore-Pressure-Instrument (PUPPI) has been developed to measure differential pore pressures in sediments. The differential pressure is the pressure above or below normal hydrostatic pressure at the depth of the measurement. It is designed to operate in water depths up to 6000 metres for periods of weeks or months, if required, and measures differential pore pressures at depths of up to 3 metres into the sediments with a resolution of 0.05 kPa. It is a free-fall device with a lance which penetrates the sediments. This lance and the ballast weight is disposed when the PUPPI is acoustically released from the sea floor. 
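A minimal sketch of the calculation hinted at in the PUPPI entry above, in which a measured differential pore pressure is combined with sediment permeability and porosity (via Darcy's law) to yield a vertical pore-water advection velocity; the permeability, porosity and pressure values below are assumed for illustration only, not PUPPI results.

    MU_WATER = 1.4e-3      # dynamic viscosity of cold seawater, Pa s (assumed)

    def advection_velocity_m_per_s(dp_pa, dz_m, permeability_m2, porosity):
        """Darcy flux q = (k/mu) * (dP/dz); seepage velocity v = q / porosity."""
        darcy_flux = (permeability_m2 / MU_WATER) * (dp_pa / dz_m)
        return darcy_flux / porosity

    if __name__ == "__main__":
        v = advection_velocity_m_per_s(dp_pa=50.0,             # 0.05 kPa differential
                                       dz_m=3.0,               # lance penetration depth
                                       permeability_m2=1e-14,  # hypothetical fine sediment
                                       porosity=0.7)
        print("upward seepage velocity ~ %.1f mm/yr" % (v * 1000 * 3600 * 24 * 365))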
When combined with permeability and porosity values of deep-sea sediments the pore pressure measurements made using the PUPPI suggest advection velocities as low as 8.8 mm/yr. The mechanical, electrical and acoustic systems are described together with data obtained from both shallow and deep water trials. (author) 9. Investigation of superstorm Sandy 2012 in a multi-disciplinary approach Science.gov (United States) Kunz, M.; Mühr, B.; Kunz-Plapp, T.; Daniell, J. E.; Khazai, B.; Wenzel, F.; Vannieuwenhuyse, M.; Comes, T.; Elmer, F.; Schröter, K.; Fohringer, J.; Münzberg, T.; Lucas, C.; Zschau, J. 2013-10-01 At the end of October 2012, Hurricane Sandy moved from the Caribbean Sea into the Atlantic Ocean and entered the United States not far from New York. Along its track, Sandy caused more than 200 fatalities and severe losses in Jamaica, The Bahamas, Haiti, Cuba, and the US. This paper demonstrates the capability and potential for near-real-time analysis of catastrophes. It is shown that the impact of Sandy was driven by the superposition of different extremes (high wind speeds, storm surge, heavy precipitation) and by cascading effects. In particular the interaction between Sandy and an extra-tropical weather system created a huge storm that affected large areas in the US. It is examined how Sandy compares to historic hurricane events, both from a hydro-meteorological and impact perspective. The distribution of losses to different sectors of the economy is calculated with simple input-output models as well as government estimates. Direct economic losses are estimated about USD 4.2 billion in the Caribbean and between USD 78 and 97 billion in the US. Indirect economic losses from power outages is estimated in the order of USD 16.3 billion. Modelling sector-specific dependencies quantifies total business interruption losses between USD 10.8 and 15.5 billion. Thus, seven years after the record impact of Hurricane Katrina in 2005, Hurricane Sandy is the second costliest hurricane in the history of the United States. 10. Investigation of superstorm Sandy 2012 in a multi-disciplinary approach Directory of Open Access Journals (Sweden) M. Kunz 2013-10-01 Full Text Available At the end of October 2012, Hurricane Sandy moved from the Caribbean Sea into the Atlantic Ocean and entered the United States not far from New York. Along its track, Sandy caused more than 200 fatalities and severe losses in Jamaica, The Bahamas, Haiti, Cuba, and the US. This paper demonstrates the capability and potential for near-real-time analysis of catastrophes. It is shown that the impact of Sandy was driven by the superposition of different extremes (high wind speeds, storm surge, heavy precipitation and by cascading effects. In particular the interaction between Sandy and an extra-tropical weather system created a huge storm that affected large areas in the US. It is examined how Sandy compares to historic hurricane events, both from a hydro-meteorological and impact perspective. The distribution of losses to different sectors of the economy is calculated with simple input-output models as well as government estimates. Direct economic losses are estimated about USD 4.2 billion in the Caribbean and between USD 78 and 97 billion in the US. Indirect economic losses from power outages is estimated in the order of USD 16.3 billion. Modelling sector-specific dependencies quantifies total business interruption losses between USD 10.8 and 15.5 billion. 
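A minimal sketch of the kind of simple input-output (Leontief) calculation referred to above for propagating direct sectoral losses into total output losses; the two-sector technical-coefficient matrix and the direct-loss vector are hypothetical, not the coefficients used in the study.

    import numpy as np

    def total_output_loss(direct_loss, tech_coeffs):
        """Total loss x solves (I - A) x = d, i.e. x = (I - A)^-1 d."""
        a = np.asarray(tech_coeffs, dtype=float)
        d = np.asarray(direct_loss, dtype=float)
        return np.linalg.solve(np.eye(len(d)) - a, d)

    if __name__ == "__main__":
        A = [[0.15, 0.25],   # hypothetical inter-industry coefficients
             [0.20, 0.10]]
        d = [4.0, 2.0]       # hypothetical direct losses, billion USD
        x = total_output_loss(d, A)
        print("total losses by sector (billion USD):", np.round(x, 2))
        print("implied indirect losses: %.2f billion USD" % (x.sum() - sum(d)))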
Thus, seven years after the record impact of Hurricane Katrina in 2005, Hurricane Sandy is the second costliest hurricane in the history of the United States. 11. A method for simulating sediment incipient motion varying with time and space in an ocean model (FVCOM): development and validation Science.gov (United States) Zhu, Zichen; Wang, Yongzhi; Bian, Shuhua; Hu, Zejian; Liu, Jianqiang; Liu, Lejun 2017-11-01 We modified the sediment incipient motion in a numerical model and evaluated the impact of this modification using a study case of the coastal area around Weihai, China. The modified and unmodified versions of the model were validated by comparing simulated and observed data of currents, waves, and suspended sediment concentrations (SSC) measured from July 25th to July 26th, 2006. A fitted Shields diagram was introduced into the sediment model so that the critical erosional shear stress could vary with time. Thus, the simulated SSC patterns were improved to more closely reflect the observed values, so that the relative error of the variation range decreased by up to 34.5% and the relative error of simulated temporally averaged SSC decreased by up to 36%. In the modified model, the critical shear stress values of the simulated silt with a diameter of 0.035 mm and mud with a diameter of 0.004 mm varied from 0.05 to 0.13 N/m2, and from 0.05 to 0.14 N/m 2, respectively, instead of remaining constant in the unmodified model. Besides, a method of applying spatially varying fractions of the mixed grain size sediment improved the simulated SSC distribution to fit better to the remote sensing map and reproduced the zonal area with high SSC between Heini Bay and the erosion groove in the modified model. The Relative Mean Absolute Error was reduced by between 6% and 79%, depending on the regional attributes when we used the modified method to simulate incipient sediment motion. But the modification achieved the higher accuracy in this study at a cost of computation speed decreasing by 1.52%. 12. The ecology of sandy beaches in Natal African Journals Online (AJOL) The ecology of sandy beaches in Natal. A.H. Dye, A. Mclachlan and T. Wooldridge. Department of Zoology, University of Port Elizabeth, Port Elizabeth. Data from an ecological survey of four sandy beaches on the. Natal coast of South Africa are presented. Physical para· meters such as beach profile, particle size, moisture, ... 13. Rare earth element and neodymium isotope tracing of element input and past ocean circulation. Study from north and south pacific seawater and sediments Energy Technology Data Exchange (ETDEWEB) Froellje, Henning 2016-08-09 Ocean circulation and cycling of trace elements within the oceanic water column is of great significance for modern and past climates. The global overturning circulation is responsible for the distribution of water masses, heat and particulate and dissolved compounds, while biological and chemical processes, such as primary productivity or particle scavenging, control the cycling of nutrients and trace elements in the ocean, and ultimately influence the ocean-atmosphere exchange of carbon. Rare earth elements (REE) and neodymium (Nd) isotopes are widely used as tracers for lithogenic element fluxes and modern and past ocean circulation and water mass mixing. The use of Nd isotopes in paleoceanographic investigations is based on the precise knowledge of processes involved in REE cycling and of the modern oceanic Nd isotope distribution. 
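Returning to the sediment incipient-motion entry (11) above: a minimal sketch of a Shields-type critical shear stress calculation, here using a commonly cited algebraic fit to the Shields curve (Soulsby-Whitehouse) purely as an illustration rather than the study's own fitted diagram; grain density, water density and viscosity are assumed values, and cohesion, which matters for the mud fraction, is ignored.

    import math

    G = 9.81          # gravity, m/s^2
    RHO_W = 1025.0    # seawater density, kg/m^3 (assumed)
    RHO_S = 2650.0    # grain density, kg/m^3 (assumed quartz)
    NU = 1.0e-6       # kinematic viscosity, m^2/s (assumed)

    def critical_shear_stress(d_m):
        """Critical bed shear stress (N/m^2) for grain diameter d (m)."""
        s = RHO_S / RHO_W
        d_star = d_m * (G * (s - 1.0) / NU**2) ** (1.0 / 3.0)
        theta_cr = 0.30 / (1.0 + 1.2 * d_star) + 0.055 * (1.0 - math.exp(-0.020 * d_star))
        return theta_cr * (RHO_S - RHO_W) * G * d_m

    if __name__ == "__main__":
        for d_mm in (0.004, 0.035):   # the mud and silt sizes quoted in entry 11
            print("d = %.3f mm -> tau_cr ~ %.3f N/m^2"
                  % (d_mm, critical_shear_stress(d_mm / 1000.0)))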
The Pacific is the largest of the world oceans, but it is highly underrepresented in present-day and past seawater Nd isotope and REE investigations compared to the Atlantic Ocean. In this study, Nd isotopes and REEs are analysed in North Pacific seawater (chapter 2) and sediment samples from the South Pacific (chapters 3-5) to contribute to a better understanding of sources and cycling of REEs and Nd isotopes in present-day seawater and to investigate past water mass mixing and circulation changes during the last glacial termination and throughout the last glacial-interglacial cycle. Neodymium isotopes in seawater and sedimentary archives (fossil fish teeth and debris, foraminifera, ferromanganese oxides, lithogenic particles) were analysed using multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS), and REE concentrations were analysed using isotope dilution ICP-MS. Results from combined analysis of REEs, and Nd and radium isotopes from North Pacific seawater (coastal seawaters of the Hawaiian Island of Oahu and seawater from the offshore Hawaii Ocean Time-series Station ALOHA) show a clear influence of the 14. Rare earth element and neodymium isotope tracing of element input and past ocean circulation. Study from north and south pacific seawater and sediments International Nuclear Information System (INIS) Froellje, Henning 2016-01-01 Ocean circulation and cycling of trace elements within the oceanic water column is of great significance for modern and past climates. The global overturning circulation is responsible for the distribution of water masses, heat and particulate and dissolved compounds, while biological and chemical processes, such as primary productivity or particle scavenging, control the cycling of nutrients and trace elements in the ocean, and ultimately influence the ocean-atmosphere exchange of carbon. Rare earth elements (REE) and neodymium (Nd) isotopes are widely used as tracers for lithogenic element fluxes and modern and past ocean circulation and water mass mixing. The use of Nd isotopes in paleoceanographic investigations is based on the precise knowledge of processes involved in REE cycling and of the modern oceanic Nd isotope distribution. The Pacific is the largest of the world oceans, but it is highly underrepresented in present-day and past seawater Nd isotope and REE investigations compared to the Atlantic Ocean. In this study, Nd isotopes and REEs are analysed in North Pacific seawater (chapter 2) and sediment samples from the South Pacific (chapters 3-5) to contribute to a better understanding of sources and cycling of REEs and Nd isotopes in present-day seawater and to investigate past water mass mixing and circulation changes during the last glacial termination and throughout the last glacial-interglacial cycle. Neodymium isotopes in seawater and sedimentary archives (fossil fish teeth and debris, foraminifera, ferromanganese oxides, lithogenic particles) were analysed using multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS), and REE concentrations were analysed using isotope dilution ICP-MS. Results from combined analysis of REEs, and Nd and radium isotopes from North Pacific seawater (coastal seawaters of the Hawaiian Island of Oahu and seawater from the offshore Hawaii Ocean Time-series Station ALOHA) show a clear influence of the 15. 
Hydrothermal Fe cycling and deep ocean organic carbon scavenging: Model-based evidence for significant POC supply to seafloor sediments Digital Repository Service at National Institute of Oceanography (India) German, C.R.; Legendre, L.L.; Sander, S.G.; Niquil, N.; Luther-III, G.W.; LokaBharathi, P.A.; Han, X.; LeBris, N. by more than ~10% over background values, what the model does indicate is that scavenging of carbon in association with Fe-rich hydrothermal plume particles should play a significant role in the delivery of particulate organic carbon to deep ocean... 16. High-resolution sub-bottom seismic and sediment core records from the Chukchi Abyssal Plain reveal Quaternary glaciation impacts on the western Arctic Ocean Science.gov (United States) Joe, Y. J.; Seokhoon, Y.; Nam, S. I.; Polyak, L.; Niessen, F. 2017-12-01 For regional context of the Quaternary history of Arctic marine glaciations, such as glacial events in northern North America and on the Siberian and Chukchi margins, we used CHIRP sub-bottom profiles (SBP) along with sediment cores, including a 14-m long piston core ARA06-04JPC taken from the Chukchi abyssal plain during the RV Araon expedition in 2015. Based on core correlation with earlier developed Arctic Ocean stratigraphies using distribution of various sedimentary proxies, core 04JPC is estimated to extend to at least Marine Isotope Stage 13 (>0.5 Ma). The stratigraphy developed for SBP lines from the Chukchi abyssal plain to surrounding slopes can be divided into four major seismostratigraphic units (SSU 1-4). SBP records from the abyssal plain show well preserved stratification, whereas on the surrounding slopes this pattern is disrupted by lens-shaped, acoustically transparent sedimentary bodies interpreted as glaciogenic debris flow deposits. Based on the integration of sediment physical property and SBP data, we conclude that these debris flows were generated during several ice-sheet grounding events on the Chukchi and East Siberian margins, including adjacent ridges and plateaus, during the middle to late Quaternary. 17. SEDIMENT PROPERTIES and Other Data from FIXED PLATFORM and Other Platforms From North Pacific Ocean from 19881030 to 19911024 (NODC Accession 9300040) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The accession contains data collected in North Pacific Ocean from Hawaiian Ocean Time Series (HOTS) project for years 1, 2 and 3 as part of Joint Global Ocean Flux... 18. Microbial biomass and organic nutrients in the deep-sea sediments of the Central Indian Ocean Basin Digital Repository Service at National Institute of Oceanography (India) Raghukumar, C.; Sheelu, G.; LokaBharathi, P.A.; Nair, S.; Mohandass, C. PMN program of the Department of Ocean Development, Government of India. The authors are thankful to the scientists and crew of AA Sidorenko for cooperation and assistance during the long cruises in the sampling area. The authors also wish to thank Ms. Gowri Rivonkar for technical assistance in the laboratory. NIO Contribution No. 3509. This paper is part of the series published in the special issue of Marine Georesources and Geotechnology, Volume 18, Number 3, 2000. Address correspondence to Chandralata Raghukumar, National Institute of... %) was two times more than that of the bacterial isolates (34%). On the other hand, number of bacterial isolates producing protease were 2.5 times more than the fungal isolates... [Figure 4 (a)-(e): Concentration of LOM, TOC, and living biomass-C in five cores at different depths.] 19.
IMPLEMENTASI SANDI HILL UNTUK PENYANDIAN CITRA Directory of Open Access Journals (Sweden) JJ Siang 2002-01-01 Full Text Available Hill's code is a text encryption technique. In this research, Hill's code is extended to image encoding. The images used are in 24-bit BMP format. Matrices of order 2x2 and 3x3 are used as keys. The results show that Hill's code is suitable for images whose RGB values vary strongly. By contrast, it is not suitable for images with little RGB variation, since the original pattern still persists in the encrypted image. Hill's code for image encoding also has the disadvantage that the key matrix is not unique. However, for everyday applications, with a good key matrix, Hill's code can be applied to image encoding, since the process involves only simple matrix operations and is therefore fast. Abstract in Bahasa Indonesia (translated): The Hill cipher is one technique for encrypting text. In this research, the use of the Hill cipher is extended from text to 24-bit BMP images. The key matrices used are of order 2x2 and 3x3. The experimental results show that the Hill cipher is suitable for encrypting images with high variation of RGB values between adjacent pixels (such as photographs), but is not suitable for images with low RGB variation (such as cartoon drawings), because the pattern of the original image remains visible in the encrypted image. The Hill cipher also has the weakness that the usable key matrix is not unique. Nevertheless, for ordinary use, with a well-chosen key matrix, the Hill cipher can be used for encryption because it involves only ordinary matrix operations, so the process is relatively fast. Keywords: Hill cipher, image, relatively prime. 20. Distribution of major, trace and rare-earth elements in surface sediments of the Wharton Basin, Indian Ocean Digital Repository Service at National Institute of Oceanography (India) Pattan, J.N.; Rao, Ch.M.; Higgs, N.C.; Colley, S.; Parthiban, G. indicate the presence of sodic feldspars in the clays (Nohara and Kato, 1985) or preferential biological removal of Na from seawater by certain calcareous organisms (El-Wakeel and Riley, 1961). In deep-sea sediments, phosphorus is mainly present... 1. Uncertainties in sandy shorelines evolution under the Bruun rule assumption Directory of Open Access Journals (Sweden) Gonéri Le Cozannet 2016-04-01 Full Text Available In the current practice of sandy shoreline change assessments, the local sedimentary budget is evaluated using the sediment balance equation, that is, by summing the contributions of longshore and cross-shore processes. The contribution of future sea-level rise induced by climate change is usually obtained using the Bruun rule, which assumes that the shoreline retreat is equal to the change of sea level divided by the slope of the upper shoreface. However, it remains unclear whether this approach is appropriate for accounting for the impacts of future sea-level rise. This is due to the lack of relevant observations to validate the Bruun rule under the expected sea-level rise rates. To address this issue, this article estimates the coastal settings and period of time under which the use of the Bruun rule could be (in)validated, in the case of wave-exposed gently-sloping sandy beaches.
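A minimal sketch of the Bruun-rule retreat estimate described in the entry above, R = S / tan(beta), where S is the sea-level change and tan(beta) the mean slope of the upper shoreface; the slope and sea-level-rise values below are hypothetical placeholders.

    def bruun_retreat_m(sea_level_rise_m, upper_shoreface_slope):
        """Shoreline retreat (m) predicted by the Bruun rule."""
        return sea_level_rise_m / upper_shoreface_slope

    if __name__ == "__main__":
        slope = 0.01                   # hypothetical gently sloping upper shoreface (1:100)
        for slr in (0.3, 0.6, 1.0):    # hypothetical sea-level rise scenarios, m
            print("SLR = %.1f m -> retreat ~ %.0f m" % (slr, bruun_retreat_m(slr, slope)))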
Using the sedimentary budgets of Stive (2004) and probabilistic sea-level rise scenarios based on IPCC, we provide shoreline change projections that account for all uncertain hydrosedimentary processes affecting idealized coasts (impacts of sea-level rise, storms and other cross-shore and longshore processes). We evaluate the relative importance of each source of uncertainty in the sediment balance equation using a global sensitivity analysis. For scenarios RCP 6.0 and 8.5 and in the absence of coastal defences, the model predicts a perceivable shift toward generalized beach erosion by the middle of the 21st century. In contrast, the model predictions are unlikely to differ from the current situation under scenario RCP 2.6. Finally, the contribution of sea-level rise and climate change scenarios to the uncertainties of sandy shoreline change projections increases with time during the 21st century. Our results have three primary implications for coastal settings similar to those described in Stive (2004): first, the validation of the Bruun rule will not necessarily be 2. Fate of copper complexes in hydrothermally altered deep-sea sediments from the Central Indian Ocean Basin. Digital Repository Service at National Institute of Oceanography (India) Chakraborty, P.; Sander, S.G.; Jayachandran, S.; Nath, B.N.; Nagaraju, G.; Chennuri, K.; Vudamala, K.; Lathika, N.; Mascarenhas-Pereira, M.B.L. concentrations of TOC in the studied sediments reduced the concentrations of Cu associated with organic matter in all the studied sediments. Cu present as the residual fraction (Fraction 4, Cures) was within the range of ~58-82% and possibly bound to sulphides... [Partial table: Cu speciation fractions (Fraction 2 Cu-Fe, Fraction 3 Cu-Corg, Fraction 4 Cu-res) and Fe-oxide phases (Cu-Feox1, Cu-Feox2, Cu-Femag) for core AAS-61 BC 8, 4-10 cm and 10-15 cm intervals.] 3. Analysis of storm-tide impacts from Hurricane Sandy in New York Science.gov (United States) Schubert, Christopher E.; Busciolano, Ronald J.; Hearn, Paul P.; Rahav, Ami N.; Behrens, Riley; Finkelstein, Jason S.; Monti, Jack; Simonson, Amy E. 2015-07-21 The hybrid cyclone-nor'easter known as Hurricane Sandy affected the mid-Atlantic and northeastern United States during October 28-30, 2012, causing extensive coastal flooding. Prior to storm landfall, the U.S. Geological Survey (USGS) deployed a temporary monitoring network from Virginia to Maine to record the storm tide and coastal flooding generated by Hurricane Sandy. This sensor network augmented USGS and National Oceanic and Atmospheric Administration (NOAA) networks of permanent monitoring sites that also documented storm surge. Continuous data from these networks were supplemented by an extensive post-storm high-water-mark (HWM) flagging and surveying campaign. The sensor deployment and HWM campaign were conducted under a directed mission assignment by the Federal Emergency Management Agency (FEMA). The need for hydrologic interpretation of monitoring data to assist in flood-damage analysis and future flood mitigation prompted the current analysis of Hurricane Sandy by the USGS under this FEMA mission assignment. 4. Sandy a změna klimatu [Sandy and climate change] Czech Academy of Sciences Publication Activity Database Pecho, Jozef 2013-01-01 Vol. 92, No. 7 (2013), pp. 408-411 ISSN 0042-4544 Institutional support: RVO:68378289 Keywords: hurricanes * climate change Subject RIV: DG - Atmosphere Sciences, Meteorology http://www.vesmir.cz/clanek/sandy-a-zmena-klimatu 5.
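Referring back to the Hill-cipher image-encoding entry (19) above, a minimal sketch of the underlying operation: pixel byte values are grouped into vectors and multiplied by a key matrix modulo 256. The 2x2 key below is a hypothetical example with an odd determinant (hence invertible mod 256); reading and writing an actual 24-bit BMP file is omitted.

    MOD = 256

    def hill_encrypt_pairs(data, key):
        """Encrypt a flat sequence of byte values (e.g. RGB samples) two at a time."""
        if len(data) % 2:
            data = list(data) + [0]          # pad with a zero byte if needed
        out = []
        (a, b), (c, d) = key
        for i in range(0, len(data), 2):
            x, y = data[i], data[i + 1]
            out.append((a * x + b * y) % MOD)
            out.append((c * x + d * y) % MOD)
        return out

    def modinv(a, m=MOD):
        """Modular inverse via extended Euclid (exists when gcd(a, m) == 1)."""
        g, x0, x1 = m, 0, 1
        a %= m
        while a:
            q, g, a = g // a, a, g % a
            x0, x1 = x1, x0 - q * x1
        return x0 % m

    def hill_decrypt_pairs(data, key):
        """Invert the 2x2 key modulo 256 and decrypt."""
        (a, b), (c, d) = key
        det_inv = modinv((a * d - b * c) % MOD)
        inv_key = ((d * det_inv % MOD, -b * det_inv % MOD),
                   (-c * det_inv % MOD, a * det_inv % MOD))
        return hill_encrypt_pairs(data, inv_key)

    if __name__ == "__main__":
        key = ((3, 2), (5, 7))               # det = 11, odd, so invertible mod 256
        pixels = [10, 200, 37, 37, 255, 0]   # hypothetical RGB samples
        cipher = hill_encrypt_pairs(pixels, key)
        print("cipher :", cipher)
        print("decrypt:", hill_decrypt_pairs(cipher, key))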
Heterotrophic bacterial populations in tropical sandy beaches Digital Repository Service at National Institute of Oceanography (India) Nair, S.; LokaBharathi, P.A. Distribution pattern of heterotrophic bacterial flora of three sandy beaches of the west coast of India was studied. The population in these beaches was microbiologically different. Population peaks of halotolerant and limnotolerant forms were... 6. Measurement of biological oxygen demand sandy beaches African Journals Online (AJOL) Measurements of biological oxygen demand in a sandy beach using conventional .... counting the cells present in a sample of aged seawater and comparing this with .... This activity peaked at 71 % above the undisturbed level after 16 hours. 7. Anaerobic oxidation of methane at a marine methane seep in a forearc sediment basin off Sumatra, Indian Ocean Directory of Open Access Journals (Sweden) Michael eSiegert 2011-12-01 Full Text Available A cold methane-seep was discovered in a forearc sediment basin off the island Sumatra, exhibiting a methane-seep adapted microbial community. A defined seep centre of activity, like in mud volcanoes, was not discovered. The seep area was rather characterized by a patchy distribution of active spots. The relevance of AOM was reflected by 13C depleted isotopic signatures of dissolved inorganic carbon (DIC. The anaerobic conversion of methane to CO2 was confirmed in a 13C-labelling experiment. Methane fuelled a vital microbial and invertebrate community which was reflected in cell numbers of up to 4 x 109 cells cm 3 sediment and 13C depleted guts of crabs populating the seep area. The microbial community was analysed by total cell counting, catalyzed reporter deposition – fluorescence in situ hybridisation (CARD-FISH, quantitative real-time PCR (qPCR and denaturing gradient gel electrophoresis (DGGE. CARD-FISH cell counts and qPCR measurements showed the presence of Bacteria and Archaea, but only small numbers of Eukarya. The archaeal community comprised largely members of ANME-1 and ANME-2. Furthermore, members of the Crenarchaeota were frequently detected in the DGGE analysis. Three major bacterial phylogenetic groups (δ-Proteobacteria, candidate division OP9 and Anaerolineaceae were abundant across the study area. Several of these sequences were closely related to the genus Desulfococcus of the family Desulfobacteraceae, which is in good agreement with previously described AOM sites. In conclusion, the majority of the microbial community at the seep consisted of AOM related microorganisms, while the relevance of higher hydrocarbons as microbial substrates was negligible. 8. Anaerobic Oxidation of Methane at a Marine Methane Seep in a Forearc Sediment Basin off Sumatra, Indian Ocean. Science.gov (United States) Siegert, Michael; Krüger, Martin; Teichert, Barbara; Wiedicke, Michael; Schippers, Axel 2011-01-01 A cold methane seep was discovered in a forearc sediment basin off the island Sumatra, exhibiting a methane-seep adapted microbial community. A defined seep center of activity, like in mud volcanoes, was not discovered. The seep area was rather characterized by a patchy distribution of active spots. The relevance of anaerobic oxidation of methane (AOM) was reflected by (13)C-depleted isotopic signatures of dissolved inorganic carbon. The anaerobic conversion of methane to CO(2) was confirmed in a (13)C-labeling experiment. Methane fueled a vital microbial community with cell numbers of up to 4 × 10(9) cells cm(-3) sediment. 
The microbial community was analyzed by total cell counting, catalyzed reporter deposition-fluorescence in situ hybridization (CARD-FISH), quantitative real-time PCR (qPCR), and denaturing gradient gel electrophoresis (DGGE). CARD-FISH cell counts and qPCR measurements showed the presence of Bacteria and Archaea, but only small numbers of Eukarya. The archaeal community comprised largely members of ANME-1 and ANME-2. Furthermore, members of the Crenarchaeota were frequently detected in the DGGE analysis. Three major bacterial phylogenetic groups (δ-Proteobacteria, candidate division OP9, and Anaerolineaceae) were abundant across the study area. Several of these sequences were closely related to the genus Desulfococcus of the family Desulfobacteraceae, which is in good agreement with previously described AOM sites. In conclusion, the majority of the microbial community at the seep consisted of AOM-related microorganisms, while the relevance of higher hydrocarbons as microbial substrates was negligible. 9. Strength Characteristics of Reinforced Sandy Soil OpenAIRE S. N. Bannikov; Mahamed Al Fayez 2005-01-01 Laboratory tests on determination of reinforced sandy soil strength characteristics (angle of internal friction, specific cohesive force) have been carried out with the help of a specially designed instrument and proposed methodology. Analysis of the obtained results has revealed that cohesive forces are brought about in reinforced sandy soil and an angle of internal soil friction becomes larger in comparison with non-reinforced soil. 10. Effects of mud sedimentation on lugworm ecosystem engineering NARCIS (Netherlands) Montserrat, F.; Suykerbuyk, W.; Al-Busaidi, R.; Bouma, T.J.; Van der Wal, D.; Herman, P.M.J. 2011-01-01 Benthic ecosystem engineering organisms attenuate hydrodynamic or biogeochemical stress to ameliorate living conditions. Bioturbating infauna, like the lugworm Arenicola marina, determine intertidal process dynamics by maintaining the sediment oxygenated and sandy. Maintaining the permeability of 11. Constraining Depositional Slope From Sedimentary Structures in Sandy Braided Streams Science.gov (United States) Lynds, R. M.; Mohrig, D.; Heller, P. L. 2003-12-01 Determination of paleoslopes in ancient fluvial systems has potentially broad application to quantitatively constraining the history of tectonics and paleoclimate in continental sequences. Our method for calculating paleoslopes for sandy braided streams is based upon a simple physical model that establishes depositional skin-frictional shear stresses from assemblages of sedimentary structures and their associated grain size distributions. The addition of a skin-frictional shear stress, with a geometrically determined form-drag shear stress results in a total boundary shear stress which is directly related to water-surface slope averaged over an appropriate spatial scale. In order to apply this model to ancient fluvial systems, it is necessary to measure the following: coarsest suspended sediment size, finest grain size carried in bed load, flow depth, dune height, and dune length. In the rock record, suspended load and bed load can be accurately assessed by well-preserved suspended load deposits ("low-energy" ripples) and bed load deposits (dune foresets). This model predicts an average slope for the North Loup River near Taylor, Nebraska (modern case study) of 2.7 x 10-3. The measured reach-averaged water surface slope for the same reach of the river is 1.37 x 10-3. 
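A minimal sketch of the final step of the paleoslope approach described above: once a total boundary shear stress (skin friction plus bedform form drag) has been estimated from sedimentary structures and grain size, the reach-averaged water-surface slope follows from the depth-slope product, S = tau / (rho * g * h); the stress and depth values below are hypothetical placeholders, not values from the North Loup River case study.

    RHO = 1000.0   # water density, kg/m^3
    G = 9.81       # gravity, m/s^2

    def slope_from_shear_stress(tau_total_pa, flow_depth_m):
        """Reach-averaged water-surface slope from the depth-slope product."""
        return tau_total_pa / (RHO * G * flow_depth_m)

    if __name__ == "__main__":
        tau = 13.0     # hypothetical total boundary shear stress, Pa
        depth = 0.9    # hypothetical flow depth, m
        print("slope ~ %.2e" % slope_from_shear_stress(tau, depth))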
We suggest that it is possible to estimate the depositional slope of a sandy fluvial system to within a factor of approximately two. Additionally, preliminary application of this model to the Lower Jurassic Kayenta Formation throughout the Colorado Plateau provides a promising and consistent evaluation of paleoslope in an ancient and well-preserved, sandy braided stream deposit. 12. Analysis of Fluvial Sediment Discharges into Kubanni Reservoir ... African Journals Online (AJOL) The sediment discharges into the Kubanni Reservoir (KR) have been measured and analysed in this study. The predominant sandy-clay sediment in the reservoir has an estimated total sediment load of 20,387,000 kg/year. The depth and area coverage of the reservoir were surveyed using a defined distributed grid line ... 13. Return to the Strangelove Ocean?: Preliminary results of carbon and oxygen isotope compositions of post-impact sediments, IODP Expedition 364 "Chicxulub Impact Crater" Science.gov (United States) Yamaguchi, K. E.; Ikehara, M.; Hayama, H.; Takiguchi, S.; Masuda, S.; Ogura, C.; Fujita, S.; Kurihara, E.; Matsumoto, T.; Oshio, S.; Ishihata, K.; Fuchizawa, Y.; Noda, H.; Sakurai, U.; Yamane, T.; Morgan, J. V.; Gulick, S. P. S. 2017-12-01 The Chicxulub crater in the northern Yucatan Peninsula, Mexico was formed by the asteroid impact at the Cretaceous-Paleogene boundary (66.0 Ma). In early 2016, IODP Expedition 364 successfully drilled material from the topographic peak ring within the crater, which had previously been identified by seismic observations. A continuous core was recovered. The 112 m-thick uppermost part of the continuous core (505.7-1334.7 mbsf) consists of post-impact sediments, including the PETM, that are mainly composed of carbonate with intercalations of siliciclastics and variable contents of organic carbon. More than 300 samples from the post-impact section were finely powdered for a variety of geochemical analyses. Here we report carbon and oxygen isotope compositions of the carbonate fraction (mostly in the lower part of the analyzed section) and carbon and nitrogen isotope compositions of organic matter (mostly in the middle-upper part of the analyzed section). An Isoprime isotope ratio mass spectrometer was used for the former analysis, and an EA-IRMS (elemental analyzer-isotope ratio mass spectrometer) for the latter, both at CMCR, Kochi University. The depth profile of oxygen isotope compositions of the carbonate fraction is variable and somewhat similar to that of Zachos et al. (2001, Science). Carbon isotope compositions of carbonate and organic carbon in the lower part of the analyzed section exhibit some excursions that could correspond to the hyperthermals in the early Paleogene. Their variable nitrogen isotope compositions reflect temporal changes in the style of biogeochemical cycles involving denitrification and nitrogen fixation. Coupled temporal changes in the carbon isotope compositions of organic and carbonate carbon immediately after the K-Pg boundary might support a Strangelove ocean (Kump, 1991, Geology); however, high export production (Ba/Ti, nannoplankton and calcisphere blooms, high planktic foram richness, and diverse and abundant micro- and macrobenthic organisms 14. Disentangling diversity patterns in sandy beaches along environmental gradients. Science.gov (United States) Barboza, Francisco R; Gómez, Julio; Lercari, Diego; Defeo, Omar 2012-01-01 Species richness in sandy beaches is strongly affected by concurrent variations in morphodynamics and salinity.
However, as in other ecosystems, different groups of species may exhibit contrasting patterns in response to these environmental variables, which would be obscured if only aggregate richness is considered. Deconstructing biodiversity, i.e. considering richness patterns separately for different groups of species according to their taxonomic affiliation, dispersal mode or mobility, could provide a more complete understanding about factors that drive species richness patterns. This study analyzed macroscale variations in species richness at 16 Uruguayan sandy beaches with different morphodynamics, distributed along the estuarine gradient generated by the Rio de la Plata over a 2 year period. Species richness estimates were deconstructed to discriminate among taxonomic groups, supralittoral and intertidal forms, and groups with different feeding habits and development modes. Species richness was lowest at intermediate salinities, increasing towards oceanic and inner estuarine conditions, mainly following the patterns shown for intertidal forms. Moreover, there was a differential tolerance to salinity changes according to the habitat occupied and development mode, which determines the degree of sensitivity of faunal groups to osmotic stress. Generalized (additive and linear) mixed models showed a clear increase of species richness towards dissipative beaches. All taxonomic categories exhibited the same trend, even though responses to grain size and beach slope were less marked for crustaceans and insects than for molluscs or polychaetes. However, supralittoral crustaceans exhibited the opposite trend. Feeding groups decreased from dissipative to reflective systems, deposit feeders being virtually absent in the latter. This deconstructive approach highlights the relevance of life history strategies in structuring communities, highlighting the relative 15. Trophic niche shifts driven by phytoplankton in sandy beach ecosystems Science.gov (United States) Bergamino, Leandro; Martínez, Ana; Han, Eunah; Lercari, Diego; Defeo, Omar 2016-10-01 Stable isotopes (δ13C and δ15N) together with chlorophyll a and densities of surf diatoms were used to analyze changes in trophic niches of species in two sandy beaches of Uruguay with contrasting morphodynamics (i.e. dissipative vs. reflective). Consumers and food sources were collected over four seasons, including sediment organic matter (SOM), suspended particulate organic matter (POM) and the surf zone diatom Asterionellopsis guyunusae. Circular statistics and a Bayesian isotope mixing model were used to quantify food web differences between beaches. Consumers changed their trophic niche between beaches in the same direction of the food web space towards higher reliance on surf diatoms in the dissipative beach. Mixing models indicated that A. guyunusae was the primary nutrition source for suspension feeders in the dissipative beach, explaining their change in dietary niche compared to the reflective beach where the proportional contribution of surf diatoms was low. The high C/N ratios in A. guyunusae indicated its high nutritional value and N content, and may help to explain the high assimilation by suspension feeders at the dissipative beach. Furthermore, density of A. guyunusae was higher in the dissipative than in the reflective beach, and cell density was positively correlated with chlorophyll a only in the dissipative beach. 
Therefore, surf diatoms are important drivers in the dynamics of sandy beach food webs, determining the trophic niche space and productivity. Our study provides valuable insights on shifting foraging behavior by beach fauna in response to changes in resource availability. 16. High-resolution magnetostratigraphic and biostratigraphic study of Ethiopian traps-related products in Oligocene sediments from the Indian Ocean Science.gov (United States) Touchard, Yannick; Rochette, Pierre; Aubry, Marie Pierre; Michard, Annie 2003-02-01 Volcanic traps correspond typically to aerial emissions of more than 10 6 km 3 of magma over 1 Myr periods. The potential global impact of such emissions makes the precise correlation of traps with the global magnetobiochronologic timescale an important task. Our study is focused on the Ethiopian traps which correspond to the birth of the Afar hotspot at the triple junction between the Red Sea, Aden Gulf and East-African rift. The Ethiopian traps have a significant acidic component (about 10% of the traps by volume) which enables more efficient stratospheric aerosol diffusion than for the main basaltic eruptions. Furthermore, a magnetostratigraphy is well established for the traps: traps activity began in Chron C11r.2r and ended in Chron C11r.1r or C10r, with well clustered 40Ar/ 39Ar ages at 30±0.5 Ma. Four tephra layers, marked by prominent magnetic susceptibility peaks, occur in Oligocene sections of sites from Ocean Drilling Program Leg 115, drilled in the southern Indian Ocean near Madingley Rise, 2600 km away from the Ethiopian traps. In order to demonstrate that these tephra layers are related to the Ethiopian traps, a high-resolution study of sites 709 and 711 was undertaken, involving magnetostratigraphy and nannofossil stratigraphy, together with isotopic and geochemical characterization of the tephra. Geochemical analyses and isotope ratios of the glass shards indicate the same acid continental source for these tephras which is compatible with the Ethiopian signature. Moreover, Hole 711A provides a reliable magnetostratigraphy for the Oligocene (Chrons 13-9). The tephra layers occur in the interval spanning Chrons C11n.2n-C11n.1n which agrees with the positions of acidic layers in the traps. Calcareous nannofossil stratigraphy confirms the magnetostratigraphic interpretation, with the NP23/24 zonal boundary occurring within the interval containing the tephra layers. Hole 709B supports the results from Hole 711A. Thus, the Ethiopian traps can be 17. Pleistocene Arid and Wet Climatic Variability: Imprint of Glacial Climate, Tectonics and Oceanographic Events in the Sediments of the se Indian Ocean, Western Australia Science.gov (United States) McHugh, C. M.; Castaneda, J.; Kominz, M. A.; Gallagher, S. J.; Gurnis, M.; Ishiwa, T.; Mamo, B. L.; Henderiks, J.; Christensen, B. A.; Groeneveld, J.; Yokoyama, Y.; Mustaque, S.; Iqbal, F. 2017-12-01 The interaction between the evolving tectonic configuration of the Indo Pacific region as a result of the northward migration of the Australian continent, and its collision with the Banda Arc began in the Late Miocene ( 8 Ma ago). This constriction played an important role in the diversion of the Indonesian Throughflow and initiation of the Leeuwin Current. These events coupled to Pleistocene glaciations left a significant imprint in the sediments offshore western Australia. 
The International Ocean Discovery Program Expedition 356 drilled in shelf depths of the Carnarvon and Perth Basins recovering a thick section of Pleistocene sediment from Sites U1461 (440 m thick) and U1460 (306 m), respectively. Analyses of the lithology (logs, grain size), chemistry (X-ray elemental analyses) and an initial age model constructed from biostratigraphy and radiocarbon ages were interpreted within the framework of multichannel seismic profiles. Radiocarbon ages provide control for MIS 1-4, and the identification of glacial cycles is based on shipboard biostratigraphy best developed for Site U1460. Arid and high productivity signals are linked with glacial stages. Wet conditions are associated with river discharge, terrigenous sediments and linked with interglacial stages. Except for one very pronounced interval the productivity signal during interglacials is low. High productivity during glacial stages is related to upwelling linked to the southward flowing Leeuwin Current. Comparison of the northernmost (U1461) with southernmost (U1460) sites reveals a strong arid and wet climatic variability beginning in the Pleistocene. This variability is most pronounced in the late Pleistocene post 0.8-1.0 Ma and can be correlated with glacial-interglacial cycles, especially in the more humid southern Site that was closer to the Subantarctic Front and influenced by the Westerlies. In Site U1461 we recovered the 135m thick Gorgon slide. Its occurrence at 1 Ma coincides with a rapid tectonic 18. Nineteen-year time-series sediment trap study of Coccolithus pelagicus and Emiliania huxleyi (calcareous nannoplankton) fluxes in the Bering Sea and subarctic Pacific Ocean Science.gov (United States) Tsutsui, Hideto; Takahashi, Kozo; Asahi, Hirofumi; Jordan, Richard W.; Nishida, Shiro; Nishiwaki, Niichi; Yamamoto, Sumito 2016-03-01 Coccolithophore fluxes at two sediment trap stations, Station AB in the Bering Sea and Station SA in the subarctic Pacific Ocean, were studied over a nineteen-year (August 1990-July 2009) interval. Two major species, Coccolithus pelagicus and Emiliania huxleyi, occur at both stations, with Gephyrocapsa oceanica, Umbilicosphaera sibogae, Braarudosphaera bigelowii, and Syracosphaera spp. as minor components. The mean coccolithophore fluxes at Stations AB and SA increased from 28.9×106 m2 d-1 and 61.9×106 m2 d-1 in 1990-1999 to 54.4×106 m2 d-1 and 130.2×106 m2 d-1 in 2002-2009, respectively. Furthermore, in late 1999 to early 2000, there was a significant shift in the most dominant species from E. huxleyi to C. pelagicus. High abundances of E. huxleyi correspond to the positive mode of the Pacific Decadal Oscillation (PDO), while those of C. pelagicus respond to the PDO negative mode and are related to water temperature changes at huxleyi. At both stations the mean seawater temperature in the top 45 m from August to October increased ca. 1 °C with linear recurrence from 1990 to 2008. The coccosphere fluxes after Year 2000 at Stations AB and SA, and the shift in species dominance, may have been influenced by this warming. 19. Three new records of Desmodorids (Nematoda, Desmodoridae) from sandy seabeds of the Canary islands OpenAIRE Riera, Rodrigo; Núñez, Jorge; Brito, María del Carmen 2012-01-01 In an ecological study of meiofaunal assemblages in two locations (Los Abrigos and Los Cristianos) of Tenerife (Canary Islands, NE Atlantic Ocean), several desmodorid species were found throughout the study period. 
Three species belonging to the family Desmodoridae were collected in intertidal and shallow subtidal sandy seabeds. These species were Desmodorella aff. tenuispiculum Allgen, 1928, Metachromadora sp. and Spirinia parasitifera Bastian, 1865. Descriptions, figures and tables with mer... 20. 2013-2014 U.S. Geological Survey CMGP LiDAR: Post Sandy (MA, NH, RI) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — TASK NAME: New England CMGP Sandy Lidar LiDAR Data Acquisition and Processing Production Task USGS Contract No. G10PC00057 Task Order No. G13PD00796 Woolpert Order... 1. Correlation between Hurricane Sandy damage along the New Jersey coast with land use, dunes and other local attributes. Science.gov (United States) 2013-08-01 The goal of this study was to evaluate the effectiveness of sand dunes along New Jerseys Coast in reducing damage during Sandy. The study area included eight selected zones with different damage levels from Ocean County. A model to independently p... 2. Inverse Relationship of Marine Aerosol and Dust in Antarctic Ice with Fine-Grained Sediment in the South Atlantic Ocean: Implications for Sea-Ice Coverage and Wind Strength Directory of Open Access Journals (Sweden) Sharon L. Kanfoush 2012-03-01 Full Text Available This research seeks to test the hypothesis that natural gamma radiation (NGR from Ocean Drilling Program Site 1094, which displays variability over the last glacial-interglacial cycle similar to dust in the Vostok ice core, reflects fine-grained terrigenous sediment delivered by eolian processes. Grain size was measured on 400 samples spanning 0–20 m in a composite core. Accumulation of the <63μ size fraction at Site 1094 and dust in Vostok exhibit a negative correlation, suggesting the fine sediments are not dominantly eolian. However the technique used for grain size measurements cannot distinguish between terrigenous and biogenous materials; therefore it is possible much fine-grained material is diatoms. An inverse correlation between fine sediments and NGR supports this interpretation, and implies terrigenous materials were at times diluted by microfossils from high biological productivity. Fine marine sediments correlate positively with temperature and negatively with marine aerosol Na+ in Vostok. One plausible explanation is extensive sea-ice of cold intervals steepened ocean-continent temperature gradients, intensified winds, and led to increased transport of dust and marine aerosol to Antarctica yet also reduced biological productivity at Site 1094. Such a reduction despite increases in NGR, potentially representing Fe-rich dust influx, would require light limitation or stratification associated with sea-ice. 3. Operational Group Sandy technical progress report Science.gov (United States) , 2013-01-01 Hurricane Sandy made US landfall near Atlantic City, NJ on 29 October 2012, causing 72 direct deaths, displacing thousands of individuals from damaged or destroyed dwellings, and leaving over 8.5 million homes without power across the northeast and mid-Atlantic. To coordinate federal rebuilding activities in the affected region, the President established the cabinet-level Hurricane Sandy Rebuilding Task Force (Task Force). 
The Task Force was charged with identifying opportunities for achieving rebuilding success while supporting economic vitality, improving public health and safety, protecting and enhancing natural and manmade infrastructure, bolstering resilience, and ensuring appropriate accountability. 4. Iceberg and meltwater discharge events in the western Arctic Ocean since MIS 5: a comparison of sediment cores off the East Siberian and Chukchi margins Science.gov (United States) Xiao, W.; Wang, R.; Zhang, T.; Duan, X.; Polyak, L. 2017-12-01 In the Pleistocene the western Arctic Ocean was affected by deglacial discharge events from ice sheets in northern North America as well as the East Siberian and Chukchi margins. Distribution of Ice Rafted Debris (IRD) >250 μm and planktonic foraminiferal N. pachyderma (sin.) (Nps) δ18O and δ13C was compared in CHINARE sediment cores ARC2-M03 (Wang et al., 2013) and ARC3-P37 from the Chukchi Abyssal Plain and Northwind Ridge, respectively, to identify the impacts of icebergs and meltwater on paleoceanographic environments since MIS 5. The IRD is mainly composed of quartz grains and fragments of clastic rocks and detrital carbonates. The carbonates, mostly dolomites characteristic of the Canadian Arctic Archipelago (CAA) provenance, typically anti-correlate with quartz and clastic rocks, indicating different sources such as Chukchi-Alaskan or East Siberian margin. Most of the Nps δ18O depletions correspond to peaks in detrital carbonates, suggesting a strong influence of meltwater from the Laurentide Ice Sheet (LIS) on the western Arctic Ocean. A conspicuous dark gray interval interpreted to represent glacial/deglacial environments of MIS 4/3 age, shows a remarkable depletion in Nps δ13C along with high δ18O values and absence of IRD. This unusual signature may be related to a persistent sea-ice cover and/or high fluxes of terrigenous material with deglacial debris flows. In a younger grey interval corresponding to MIS2, high abundances of quartz and clastic rocks in the Northwind Ridge core ARC3-P37 indicate iceberg discharge from areas other than CAA, such as the Mackenzie LIS lobe or Chukchi-Alaskan margin. The MIS2-Holocene transition is marked by an increase in detrital carbonates co-occurring with Nps δ13C and δ18O depletion (Polyak et al., 2007), indicative of LIS iceberg/meltwater fluxes from the CAA. We note that stable-isotope events in the study area may go unnoticed because of gaps in foraminiferal records related to dissolution and/or adverse 5. Seasonal and inter-annual dynamics of suspended sediment at the mouth of the Amazon river: The role of continental and oceanic forcing, and implications for coastal geomorphology and mud bank formation Science.gov (United States) Gensac, Erwan; Martinez, Jean-Michel; Vantrepotte, Vincent; Anthony, Edward J. 2016-04-01 Fine-grained sediments supplied to the Ocean by the Amazon River and their transport under the influence of continental and oceanic forcing drives the geomorphic change along the 1500 km-long coast northward to the Orinoco River delta. The aim of this study is to give an encompassing view of the sediment dynamics in the shallow coastal waters from the Amazon River mouth to the Capes region (northern part of the Amapa region of Brazil and eastern part of French Guiana), where large mud banks are formed. Mud banks are the overarching features in the dynamics of the Amazon-Orinoco coast. They start migrating northward in the Capes region. 
Suspended Particulate Matter (SPM) concentrations were calculated from satellite products (MODIS Aqua and Terra) acquired over the period 2000-2013. The Census-X11 decomposition method used to discriminate short-term, seasonal and long-term time components of the SPM variability has rendered possible a robust analysis of the impact of continental and oceanic forcing. Continental forcing agents considered are the Amazon River water discharge, SPM concentration and sediment discharge. Oceanic forcing comprises modelled data of wind speed and direction, wave height and direction, and currents. A 150 km-long area of accretion is detected at Cabo Norte that may be linked with a reported increase in the river's sediment discharge concurrent with the satellite data study period. We also assess the rate of mud bank migration north of Cabo Norte, and highlight its variability. Although we confirm a 2 km y-1 migration rate, in agreement with other authors, we show that this velocity may be up to 5 km y-1 along the Cabo Orange region, and we highlight the effect of water discharge by major rivers debouching on this coastal mud belt in modulating such rates. Finally, we propose a refined sediment transport pattern map of the region based on our results and of previous studies in the area such as the AMASSEDS programme, and discuss the 6. Paleoclimate of Quaternary Costa Rica: Analysis of Sediment from ODP Site 1242 in the Eastern Tropical Pacific to Explore the Behavior of the Intertropical Convergence Zone (ITCZ) and Oceanic Circulation Science.gov (United States) Buczek, C. R.; Joseph, L. H. 2017-12-01 Studies of grain size, magnetic fabric, and terrigenous mass accumulation rates (MAR) on oceanic sediment can provide insights into climatic conditions present at or near the time of deposition by helping to delineate changes in rainfall and oceanic circulation intensities. The fairly homogenous hemipelagic nannofossil clays and clayey nannofossil oozes collected in the upper portion of Ocean Drilling Program (ODP) Site 1242 provide a 1.4 million year sediment record from the Cocos Ridge, in relatively shallow waters of the eastern tropical Pacific Ocean, off the coast of present day Central and South America. Information about shifts in rainfall and oceanic circulation provided by this study may be helpful in understanding changes in the location and behavior of the Intertropical Convergence Zone (ITCZ), and/or other climatic factors, in this area during the Pleistocene and Holocene Epochs. Approximately 130 paired side-by-side samples were selected at approximately evenly spaced intervals throughout the uppermost 190 mcd of the core. To obtain terrigenous grain size and MARs, one set of sediment samples was subject to a five-step chemical extraction process to dissolve any oxy-hydroxy coatings, remove the biogenic carbonate and silicate components, and sieve out grains larger than 63 µm. The pre- and post-extraction weights were compared to calculate a terrigenous weight percent (%) from which the terrigenous MAR values were then calculated, with the use of linear sediment rates and dry bulk density measurements determined from shipboard ODP 1242 analyses. Magnetic fabric, or anisotropy of magnetic susceptibility (AMS), was analyzed on a KLY4S-Kappabridge using the second set of samples taken in pmag cubes. Terrigenous MAR values range between 3.1 and 10.9 g/cm2/kyr, while P' (AMS) values range between 1.004 and 1.04 SI. 
A distinctive trend is noted in both factors, with both exhibiting relatively high initial values that then decrease from the beginning of the 7. Sandy Hook : alternative access concept plan and vehicle replacement study Science.gov (United States) 2009-06-01 This study addresses two critical issues of concern to the Sandy Hook Unit of Gateway National : Recreational Area: (1) options for alternative access to Sandy Hook during peak summer season, : particularly when the park is closed to private vehicles... 8. Assessment of grass root effects on soil piping in sandy soils using the pinhole test Science.gov (United States) Bernatek-Jakiel, Anita; Vannoppen, Wouter; Poesen, Jean 2017-10-01 Soil piping is an important land degradation process that occurs in a wide range of environments. Despite an increasing number of studies on this type of subsurface erosion, the impact of vegetation on piping erosion is still unclear. It can be hypothesized that vegetation, and in particular plant roots, may reduce piping susceptibility of soils because roots of vegetation also control concentrated flow erosion rates or shallow mass movements. Therefore, this paper aims to assess the impact of grass roots on piping erosion susceptibility of a sandy soil. The pinhole test was used as it provides quantitative data on pipeflow discharge, sediment concentration and sediment discharge. Tests were conducted at different hydraulic heads (i.e., 50 mm, 180 mm, 380 mm and 1020 mm). Results showed that the hydraulic head was positively correlated with pipeflow discharge, sediment concentration and sediment discharge, while the presence of grass roots (expressed as root density) was negatively correlated with these pipeflow characteristics. Smaller sediment concentrations and sediment discharges were observed in root-permeated samples compared to root-free samples. When root density exceeds 0.5 kg m- 3, piping erosion rates decreased by 50% compared to root-free soil samples. Moreover, if grass roots are present, the positive correlation between hydraulic head and both sediment discharge and sediment concentration is less pronounced, demonstrating that grass roots become more effective in reducing piping erosion rates at larger hydraulic heads. Overall, this study demonstrates that grass roots are quite efficient in reducing piping erosion rates in sandy soils, even at high hydraulic head (> 1 m). As such, grass roots may therefore be used to efficiently control piping erosion rates in topsoils. 9. The ecology of sandy beaches in Transkei African Journals Online (AJOL) Data from an ecological survey of three sandy beaches in. Transkei and from Gulu beach on the eastern Cape coast,. South Africa, are presented. Physical parameters such as beach profile, sand particle size, Eh and carbonate content, as well as abundance, composition, biomass and distribution of the macrofauna and ... 10. Lessons from Hurricane Sandy for port resilience. Science.gov (United States) 2013-12-01 New York Harbor was directly in the path of the most damaging part of Hurricane Sandy causing significant impact on many of the : facilities of the Port of New York and New Jersey. The U.S. Coast Guard closed the entire Port to all traffic before the... 11. Transportation during and after Hurricane Sandy. Science.gov (United States) 2012-11-01 "Hurricane Sandy demonstrated the strengths and limits of the transportation infrastructure in New York City and the surrounding region. 
As a result of the timely and thorough preparations by New York City and the MTA, along with the actions of city ... 12. Early Paleogene variations in the calcite compensation depth : New constraints using old borehole sediments from across Ninetyeast Ridge, central Indian Ocean NARCIS (Netherlands) Slotnick, B. S.; Lauretano, V.; Backman, J.; Dickens, G. R.; Sluijs, A.; Lourens, L. 2015-01-01 Major variations in global carbon cycling occurred between 62 and 48 Ma, and these very likely related to changes in the total carbon inventory of the ocean-atmosphere system. Based on carbon cycle theory, variations in the mass of the ocean carbon should be reflected in contemporaneous global ocean 13. An analysis of the synoptic and dynamical characteristics of hurricane Sandy (2012) Science.gov (United States) Varlas, George; Papadopoulos, Anastasios; Katsafados, Petros 2018-01-01 Hurricane Sandy affected the Caribbean Islands and the Northeastern United States in October 2012 and caused 233 fatalities, severe rainfalls, floods, electricity blackouts, and 75 billion U.S. dollars in damages. In this study, the synoptic and dynamical characteristics that led to the formation of the hurricane are investigated. The system was driven by the interaction between the polar jet displacement and the subtropical jet stream. In particular, Sandy was initially formed as a tropical depression system over the Caribbean Sea and the unusually warm sea drove its intensification. The interaction between a rapidly approaching trough from the northwest and the stagnant ridge over the Atlantic Ocean drove Sandy to the northeast coast of United States. To better understand the dynamical characteristics and the mechanisms that triggered Sandy, a non-hydrostatic mesoscale model has been used. Model results indicate that the surface heat fluxes and the moisture advection enhanced the convective available potential energy, increased the low-level convective instability, and finally deepened the hurricane. Moreover, the upper air conditions triggered the low-level frontogenesis and increased the asymmetry of the system which finally affected its trajectory. 14. Swashed away? Storm impacts on sandy beach macrofaunal communities Science.gov (United States) Harris, Linda; Nel, Ronel; Smale, Malcolm; Schoeman, David 2011-09-01 Storms can have a large impact on sandy shores, with powerful waves eroding large volumes of sand off the beach. Resulting damage to the physical environment has been well-studied but the ecological implications of these natural phenomena are less known. Since climate change predictions suggest an increase in storminess in the near future, understanding these ecological implications is vital if sandy shores are to be proactively managed for resilience. Here, we report on an opportunistic experiment that tests the a priori expectation that storms impact beach macrofaunal communities by modifying natural patterns of beach morphodynamics. Two sites at Sardinia Bay, South Africa, were sampled for macrofauna and physical descriptors following standard sampling methods. This sampling took place five times at three- to four-month intervals between April 2008 and August 2009. The second and last sampling events were undertaken after unusually large storms, the first of which was sufficiently large to transform one site from a sandy beach into a mixed shore for the first time in living memory. A range of univariate (linear mixed-effects models) and multivariate (e.g. 
non-metric multidimensional scaling, PERMANOVA) methods were employed to describe trends in the time series, and to explore the likelihood of possible explanatory mechanisms. Macrofaunal communities at the dune-backed beach (Site 2) withstood the effects of the first storm but were altered significantly by the second storm. In contrast, macrofaunal communities at Site 1, where the supralittoral had been anthropogenically modified so that exchange of sediments with the beach was limited, were strongly affected by the first storm and showed little recovery over the study period. In line with predictions from ecological theory, beach morphodynamics was found to be a strong driver of temporal patterns in the macrofaunal community structure, with the storm events also identified as a significant factor, likely
15. Study of the geochemistry of the cosmogenic isotope {sup 10}Be and the stable isotope {sup 9}Be in oceanic environment. Application to marine sediment dating; Étude de la géochimie de l'isotope cosmogénique {sup 10}Be et de son isotope stable {sup 9}Be en milieu océanique. Application à la datation des sédiments marins
Energy Technology Data Exchange (ETDEWEB)
Bourles, D
1988-01-01
The radioisotope {sup 10}Be is formed by spallation reactions in the atmosphere. It is transferred to the oceans in soluble form by precipitation and dry deposition. The stable isotope {sup 9}Be comes from erosion of soils and rocks in the Earth's crust. It is transported by wind and rivers and introduced to the oceans probably in both soluble and insoluble form. {sup 9}Be was measured by atomic absorption spectrometry and {sup 10}Be by accelerator mass spectrometry (AMS). The distribution of {sup 10}Be and {sup 9}Be among the extracted phases, and the associated {sup 10}Be/{sup 9}Be ratios, were studied in recent marine sediments from the Atlantic, Pacific and Indian Oceans and the Mediterranean Sea. The results show that, for beryllium, the two essential constituent phases of marine sediments are the authigenic phase, which incorporates the soluble beryllium, and the detrital phase. The {sup 10}Be/{sup 9}Be ratio associated with the authigenic fraction varies with location. This suggests that the residence time of beryllium in the soluble phase is shorter than or comparable to the mixing time of the oceans. The evolution with time of the authigenic {sup 10}Be/{sup 9}Be ratio is discussed.
16. Fine organic particles in a sandy beach system (Puck Bay, Baltic Sea)
Directory of Open Access Journals (Sweden)
Lech Kotwicki
2005-06-01
Full Text Available A total of over 550 samples of particulate organic matter (POM) were obtained from swash and groundwater samples taken on a monthly basis from seven localities on the sandy shores of Puck Bay in 2002 and 2003. Sandy sediment cores from the swash zone were collected to assess the amount of POM in the pore waters. The mean annual concentrations of POM varied between localities from 20 to 500 mg dm-3 in groundwater and from 6 to 200 mg dm-3 in swash water. The carbon/nitrogen (C/N) ratio in suspended matter was always higher in groundwater (annual mean 12) than in swash water (annual mean 7). The C/N ratio indicates a local, algal origin of POM in the shallow coastal zone.
17. Responses of soil fungal community to the sandy grassland restoration in Horqin Sandy Land, northern China.
Science.gov (United States)
Wang, Shao-Kun; Zuo, Xiao-An; Zhao, Xue-Yong; Li, Yu-Qiang; Zhou, Xin; Lv, Peng; Luo, Yong-Qing; Yun, Jian-Ying
2016-01-01
Sandy grassland restoration is a vital process in arid and semi-arid regions, involving the restructuring of soils, restoration of vegetation, and recovery of soil functioning. The soil fungal community is a complex and critical component of soil functioning and ecological balance, owing to its roles in organic matter decomposition and nutrient cycling following sandy grassland restoration. In this study, the soil fungal community and its relationship with environmental factors were examined along a habitat gradient of sandy grassland restoration: mobile dunes (MD), semi-fixed dunes (SFD), fixed dunes (FD), and grassland (G). Species abundance, richness, and diversity of the fungal community increased along with sandy grassland restoration. Sequence analysis suggested that most of the fungal species (68.4 %) belonged to the phylum Ascomycota. The three predominant fungal species were Pleospora herbarum, Wickerhamomyces anomalus, and Deconica montana, accounting for more than one fourth of all the 38 species. Geranomyces variabilis was the subdominant species in MD, Pseudogymnoascus destructans and Mortierella alpina were the subdominant species in SFD, and P. destructans and Fungi incertae sedis were the dominant species in FD and G. Results from redundancy analysis (RDA) and stepwise regression analysis indicated that vegetation characteristics and soil properties explain a significant proportion of the variation in the fungal community, and that aboveground biomass and C:N ratio are the key factors determining soil fungal community composition during sandy grassland restoration. This suggests that the restoration of sandy grassland, together with the associated changes in vegetation and soil properties, improved soil fungal diversity, and that the dominant species shifted over the course of restoration of sandy grassland ecosystems.
18. Long-term vegetation, climate and ocean dynamics inferred from a 73,500-year-old marine sediment core (GeoB2107-3) off southern Brazil
Science.gov (United States)
Gu, Fang; Zonneveld, Karin A. F.; Chiessi, Cristiano M.; Arz, Helge W.; Pätzold, Jürgen; Behling, Hermann
2017-09-01
Long-term changes in vegetation and climate of southern Brazil, as well as ocean dynamics of the adjacent South Atlantic, were studied by analyses of pollen, spores and organic-walled dinoflagellate cysts (dinocysts) in marine sediment core GeoB2107-3, collected offshore southern Brazil and covering the last 73.5 cal kyr BP. The pollen record indicates that grasslands were much more frequent in the landscapes of southern Brazil during the last glacial period compared to the late Holocene, reflecting relatively colder and/or less humid climatic conditions. Patches of forest occurred in the lowlands and probably also on the exposed continental shelf, which was mainly covered by salt marshes. Interestingly, drought-susceptible Araucaria trees were frequent in the highlands (with a similar abundance as during the late Holocene) until 65 cal kyr BP, but were rare during the following glacial period. Atlantic rainforest was present in the northern lowlands of southern Brazil during the recorded part of the last glacial period, but was strongly reduced from 38.5 until 13.0 cal kyr BP. The reduction was probably controlled by colder and/or less humid climatic conditions.
Atlantic rainforest expanded to the south since the Lateglacial period, while Araucaria forests advanced in the highlands only during the late Holocene. Dinocysts data indicate that the Brazil Current (BC) with its warm, salty and nutrient-poor waters influenced the study area throughout the investigated period. However, variations in the proportion of dinocyst taxa indicating an eutrophic environment reflect the input of nutrients transported mainly by the Brazilian Coastal Current (BCC) and partly discharged by the Rio Itajaí (the major river closest to the core site). This was strongly related to changes in sea level. A stronger influence of the BCC with nutrient rich waters occurred during Marine Isotope Stage (MIS) 4 and in particular during the late MIS 3 and MIS 2 under low sea level. Evidence of Nothofagus pollen 19. Rebuilding Emergency Care After Hurricane Sandy. Science.gov (United States) Lee, David C; Smith, Silas W; McStay, Christopher M; Portelli, Ian; Goldfrank, Lewis R; Husk, Gregg; Shah, Nirav R 2014-04-09 A freestanding, 911-receiving emergency department was implemented at Bellevue Hospital Center during the recovery efforts after Hurricane Sandy to compensate for the increased volume experienced at nearby hospitals. Because inpatient services at several hospitals remained closed for months, emergency volume increased significantly. Thus, in collaboration with the New York State Department of Health and other partners, the Health and Hospitals Corporation and Bellevue Hospital Center opened a freestanding emergency department without on-site inpatient care. The successful operation of this facility hinged on key partnerships with emergency medical services and nearby hospitals. Also essential was the establishment of an emergency critical care ward and a system to monitor emergency department utilization at affected hospitals. The results of this experience, we believe, can provide a model for future efforts to rebuild emergency care capacity after a natural disaster such as Hurricane Sandy. (Disaster Med Public Health Preparedness. 2014;0:1-4). 20. Sedimentation in a river dominated estuary CSIR Research Space (South Africa) Cooper, JAG 1993-10-01 Full Text Available The Mgeni Estuary on the wave dominated cast coast of South Africa occupies a narrow, bedrock confined, alluvial valley and is partially blocked at the coast by an elongate sandy barrier. Fluvial sediment extends to the barrier and marine depositon... 1. Keurbooms Estuary floods and sedimentation Directory of Open Access Journals (Sweden) Eckart H. Schumann 2015-11-01 Full Text Available The Keurbooms Estuary at Plettenberg Bay lies on a wave-dominated, microtidal coast. It has a dune-topped sandy barrier, or barrier dune, almost 4 km long, with a narrow back-barrier lagoon connected to its source rivers, the Keurbooms and Bitou. The estuary exits to the sea through this barrier dune, and it is the geomorphology and mouth position in relation to floods, which is the subject of this paper. Measurements of rainfall, water level, waves and high- and low-tide water lines were used to analyse the mouth variability over the years 2006–2012. Two major floods occurred during this time, with the first in November 2007 eroding away more than 500 000 m3 of sediment. The new mouth was established at the Lookout Rocks limit – the first time since 1915. The second flood occurred in July 2012 and opened up a new mouth about 1 km to the north-east; high waves also affected the position of the breach. 
The mouth has a tendency to migrate southwards against the longshore drift, but at any stage this movement can be augmented or reversed. The effectiveness of floods in breaching a new mouth through the barrier dune depends on the flood size and the nature of the exit channel in the back-barrier lagoon. Other factors such as ocean waves, sea level, vegetative state of the dune and duration of the flood are also important and can determine where the breach occurs, and if the new mouth will dominate the old mouth. 2. A Paleogeographic and Depositional Model for the Neogene Fluvial Succession, Pishin Belt, Northwest Pakistan: Effect of Post Collisional Tectonics on Sedimentation in a Peripheral Foreland Setting DEFF Research Database (Denmark) Kasi, Aimal Khan; Kassi, Akhtar Muhammad; Umar, Muhammad 2018-01-01 . During the Early Miocene, subaerial sedimentation started after the final closure of the Katawaz Remnant Ocean. Based on detailed field data, twelve facies were recognized in Neogene successions exposed in the Pishin Belt. These facies were further organized into four facies associations i.e. channels......‐story sandstone and/or conglomerate channels, lateral accretion surfaces (point bars) and alluvial fans. Neogene sedimentation in the Pishin Belt was mainly controlled by active tectonism and thrusting in response to the oblique collision of the Indian Plate with the Afghan Block of the Eurasian Plate along......, crevasse splay, natural levee and floodplain facies associations. Facies associations and variations provided ample evidence to recognize a number of fluvial architectural components in the succession e.g., low‐sinuosity sandy braided river, mixed‐load meandering, high‐sinuosity meandering channels, single... 3. Spatial and temporal small-scale variation in groundwater quality of a shallow sandy aquifer DEFF Research Database (Denmark) Bjerg, Poul Løgstrup; Christensen, Thomas Højlund 1992-01-01 The groundwater quality of a shallow unconfined sandy aquifer has been characterized for pH, alkalinity, chloride, nitrate, sulfate, calcium, magnesium, sodium and potassium in terms of vertical and horizontal variations (350 groundwater samples). The test area is located within a farmland lot....... The geology of the area described on the basis of 31 sediment cores appears relatively homogeneous. Large vertical and horizontal variations were observed. The vertical variations are strongly affected by the deviating composition of the agricultural infiltration water. The horizontal variations show very... 4. Field experiment on multicomponent ion exchange in a sandy aquifer International Nuclear Information System (INIS) Bjerg, P.L.; Christensen, T.H. 1990-01-01 A field experiment is performed in a sandy aquifer in order to study ion exchange processes and multicomponent solute transport modeling. An injection of groundwater spiked with sodium and potassium chloride was performed over a continuous period of 37 days. The plume is monitored by sampling 350 filters in a spatial grid. The sampling aims at establishing compound (calcium, magnesium, potassium, sodium, chloride) breakthrough curves at various filters 15 to 100 m from the point of injection and areal distribution maps at various cross sections from 0 to 200 m from the point of injection. A three-dimensional multicomponent solute transport model will be used to model the field experiments. The chemical model includes cation exchange, precipitation, dissolution, complexation, ionic strength and the carbonate system. 
Preliminary results from plume monitoring show that the plume migration is relatively well controlled considering the scale and conditions of the experiment. The transverse dispersion is small causing less dilution than expected. The ion exchange processes have an important influence on the plume composition. Retardation of the injected ions is substantial, especially for potassium. Calcium exhibits a substantial peak following chloride due to release from the ion exchange sites on the sediment. (Author) (8 refs., 5 figs., tab.) 5. Epidemic gasoline exposures following Hurricane Sandy. Science.gov (United States) Kim, Hong K; Takematsu, Mai; Biary, Rana; Williams, Nicholas; Hoffman, Robert S; Smith, Silas W 2013-12-01 Major adverse climatic events (MACEs) in heavily-populated areas can inflict severe damage to infrastructure, disrupting essential municipal and commercial services. Compromised health care delivery systems and limited utilities such as electricity, heating, potable water, sanitation, and housing, place populations in disaster areas at risk of toxic exposures. Hurricane Sandy made landfall on October 29, 2012 and caused severe infrastructure damage in heavily-populated areas. The prolonged electrical outage and damage to oil refineries caused a gasoline shortage and rationing unseen in the USA since the 1970s. This study explored gasoline exposures and clinical outcomes in the aftermath of Hurricane Sandy. Prospectively collected, regional poison control center (PCC) data regarding gasoline exposure cases from October 29, 2012 (hurricane landfall) through November 28, 2012 were reviewed and compared to the previous four years. The trends of gasoline exposures, exposure type, severity of clinical outcome, and hospital referral rates were assessed. Two-hundred and eighty-three gasoline exposures were identified, representing an 18 to 283-fold increase over the previous four years. The leading exposure route was siphoning (53.4%). Men comprised 83.0% of exposures; 91.9% were older than 20 years of age. Of 273 home-based calls, 88.7% were managed on site. Asymptomatic exposures occurred in 61.5% of the cases. However, minor and moderate toxic effects occurred in 12.4% and 3.5% of cases, respectively. Gastrointestinal (24.4%) and pulmonary (8.4%) symptoms predominated. No major outcomes or deaths were reported. Hurricane Sandy significantly increased gasoline exposures. While the majority of exposures were managed at home with minimum clinical toxicity, some patients experienced more severe symptoms. Disaster plans should incorporate public health messaging and regional PCCs for public health promotion and toxicological surveillance. 6. LDEO Carbonate Data - CaCO3 Percentages for 328 Sediments Cores, Principally from The Atlantic Ocean Spanning 100,000 to 200,000 Years bp Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — LDEO Carbonate data were compiled under the direction of A. Esmay and W.F. Ruddiman at the Lamont-Doherty Earth Observatory of Columbia University. Data include... 7. Rapid sedimentation of iron oxyhydroxides in an active hydrothermal shallow semi-enclosed bay at Satsuma Iwo-Jima Island, Kagoshima, Japan Science.gov (United States) Kiyokawa, Shoichi; Ueshiba, Takuya 2015-04-01 Hydrothermal activity is common in the fishing port of Nagahama Bay, a small semi-enclosed bay located on the southwest coast of Satsuma Iwo-Jima Island (38 km south of Kyushu Island, Japan). 
The bay contains red-brown iron oxyhydroxides and thick deposits of sediment. In this work, the high concentration and sedimentation rates of oxyhydroxide in this bay were studied and the sedimentary history was reconstructed. Since dredging work in 1998, a thickness of 1.0-1.5 m of iron oxyhydroxide-rich sediments has accumulated on the floor of the bay. To estimate the volume of iron oxyhydroxide sediments and the amount discharged from hydrothermal vents, sediment traps were operated for several years and 13 sedimentary core samples were collected to reconstruct the 10-year sedimentary history of Nagahama Bay. To confirm the timing of sedimentary events, the core data were compared with meteorological records obtained on the island, and the ages of characteristic key beds were thus identified. The sedimentation rate of iron oxyhydroxide mud was calculated, after correcting for sediment input from other sources. The sediments in the 13 cores from Nagahama Bay consist mainly of iron oxyhydroxide mud, three thick tephra beds, and a topmost thick sandy mud bed. Heavy rainfall events in 2000, 2001, 2002, and 2004-2005 coincide with tephra beds, which were reworked from Iwo-Dake ash deposits to form tephra-rich sediment. Strong typhoon events with gigantic waves transported outer-ocean-floor sediments and supplied quartz, cristobalite, tridymite, and albite sands to Nagahama Bay. These materials were redeposited together with bay sediments as the sandy mud bed. Based on the results from the sediment traps and cores, it is estimated that the iron oxyhydroxide mud accumulated in the bay at the relatively rapid rate of 33.3 cm/year (from traps) and 2.8-4.9 cm/year (from cores). The pore water contents within the sediment trap and core sediments are 73%-82% and 47%-67%, respectively 8. Characteristics of a sandy depositional lobe on the outer Mississippi fan from SeaMARC IA sidescan sonar images Science.gov (United States) Twichell, David C.; Schwab, William C.; Nelson, C. Hans; Kenyon, Neil H.; Lee, Homa J. 1992-01-01 SeaMARC IA sidescan sonar images of the distal reaches of a depositional lobe on the Mississippi Fan show that channelized rather than unconfined transport was the dominant transport mechanism for coarse-grained sediment during the formation of this part of the deep-sea fan. Overbank sheet flow of sands was not an important process in the transport and deposition of the sandy and silty sediment found on this fan. The dendritic distributary pattern and the high order of splaying of the channels, only one of which appears to have been active at a time, suggest that coarse-grained deposits on this fan are laterally discontinuous. 9. Radon emanation coefficients in sandy soils International Nuclear Information System (INIS) Holy, K.; Polaskova, A.; Baranova, A.; Sykora, I.; Hola, O. 1998-01-01 In this contribution the results of the study of an influence of the water content on the emanation coefficient for two sandy soil samples are reported. These samples were chosen on the because of the long-term continual monitoring of the 222 Rn concentration just in such types of soils and this radon concentration showed the significant variations during a year. These variations are chiefly given in connection with the soil moisture. Therefore, the determination of the dependence of the emanation coefficient of radon on the water content can help to evaluate the influence of the soil moisture variations of radon concentrations in the soil air. 
The presented results show that the emanation coefficient reaches the constant value in the wide interval of the water content for both sandy soil samples. Therefore, in the common range of the soil moisture (5 - 20 %) it is impossible to expect the variations of the radon concentration in the soil air due to the change of the emanation coefficient. The expressive changes of the radon concentration in the soil air can be observed in case of the significant decrease of the emanation coefficient during the soil drying when the water content decreases under 5 % or during the complete filling of the soil pores by the water. (authors) 10. Hurricane Sandy science plan: coastal impact assessments Science.gov (United States) Stronko, Jakob M. 2013-01-01 Hurricane Sandy devastated some of the most heavily populated eastern coastal areas of the Nation. With a storm surge peaking at more than 19 feet, the powerful landscape-altering destruction of Hurricane Sandy is a stark reminder of why the Nation must become more resilient to coastal hazards. In response to this natural disaster, the U.S. Geological Survey (USGS) received a total of$41.2 million in supplemental appropriations from the Department of the Interior (DOI) to support response, recovery, and rebuilding efforts. These funds support a science plan that will provide critical scientific information necessary to inform management decisions for recovery of coastal communities, and aid in preparation for future natural hazards. This science plan is designed to coordinate continuing USGS activities with stakeholders and other agencies to improve data collection and analysis that will guide recovery and restoration efforts. The science plan is split into five distinct themes: coastal topography and bathymetry, impacts to coastal beaches and barriers, impacts of storm surge, including disturbed estuarine and bay hydrology, impacts on environmental quality and persisting contaminant exposures, impacts to coastal ecosystems, habitats, and fish and wildlife. This fact sheet focuses assessing impacts to coastal beaches and barriers.
11. Numerical modeling of the effects of Hurricane Sandy and potential future hurricanes on spatial patterns of salt marsh morphology in Jamaica Bay, New York City
Science.gov (United States)
Wang, Hongqing; Chen, Qin; Hu, Kelin; Snedden, Gregg A.; Hartig, Ellen K.; Couvillion, Brady R.; Johnson, Cody L.; Orton, Philip M.
2017-03-29
The salt marshes of Jamaica Bay, managed by the New York City Department of Parks & Recreation and the Gateway National Recreation Area of the National Park Service, serve as a recreational outlet for New York City residents, mitigate flooding, and provide habitat for critical wildlife species. Hurricanes and extra-tropical storms have been recognized as one of the critical drivers of coastal wetland morphology due to their effects on hydrodynamics and sediment transport, deposition, and erosion processes. However, the magnitude and mechanisms of hurricane effects on sediment dynamics and associated coastal wetland morphology in the northeastern United States are poorly understood. In this study, the depth-averaged version of the Delft3D modeling suite, integrated with field measurements, was utilized to examine the effects of Hurricane Sandy and future potential hurricanes on salt marsh morphology in Jamaica Bay, New York City. Hurricane Sandy-induced wind, waves, storm surge, water circulation, sediment transport, deposition, and erosion were simulated by using the modeling system in which vegetation effects on flow resistance, surge reduction, wave attenuation, and sedimentation were also incorporated. Observed marsh elevation change and accretion from a rod surface elevation table and feldspar marker horizons and cesium-137- and lead-210-derived long-term accretion rates were used to calibrate and validate the wind-waves-surge-sediment transport-morphology coupled model.The model results (storm surge, waves, and marsh deposition and erosion) agreed well with field measurements. The validated modeling system was then used to detect salt marsh morphological change due to Hurricane Sandy across the entire Jamaica Bay over the short-term (for example, 4 days and 1 year) and long-term (for example, 5 and 10 years). Because Hurricanes Sandy (2012) and Irene (2011) were two large and destructive tropical cyclones which hit the northeast coast, the validated coupled
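The accretion constraints mentioned above (feldspar marker horizons and cesium-137/lead-210 profiles) reduce, in their simplest form, to depth-over-time arithmetic. The sketch below illustrates that arithmetic with made-up depths and dates; it is not data from the Jamaica Bay study, and real chronologies involve further corrections (compaction, mixing, decay modeling).

```python
# Illustrative only: hypothetical depths and dates, not values from the Jamaica Bay study.

def cs137_accretion_rate(peak_depth_cm, core_year, fallout_peak_year=1963):
    """Long-term accretion rate (cm/yr) from the depth of the 137Cs fallout peak (1963)."""
    return peak_depth_cm / (core_year - fallout_peak_year)

def marker_horizon_rate(burial_depth_mm, years_since_deployment):
    """Short-term accretion rate (mm/yr) from burial of a feldspar marker horizon."""
    return burial_depth_mm / years_since_deployment

if __name__ == "__main__":
    print(f"137Cs-derived rate:  {cs137_accretion_rate(15.0, 2013):.2f} cm/yr")  # peak at 15 cm
    print(f"Marker-horizon rate: {marker_horizon_rate(9.0, 2.0):.2f} mm/yr")     # 9 mm in 2 yr
```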
12. Correlation between landscape fragmentation and sandy desertification: a case study in Horqin Sandy Land, China.
Science.gov (United States)
Ge, Xiaodong; Dong, Kaikai; Luloff, A E; Wang, Luyao; Xiao, Jun; Wang, Shiying; Wang, Qian
2016-01-01
The exact roles of landscape fragmentation on sandy desertification are still not fully understood, especially with the impact of different land use types in spatial dimension. Taking patch size and shape into consideration, this paper selected the Ratio of Patch Size and the Fractal Dimension Index to establish a model that reveals the association between the area of bare sand land and the fragmentation of different land use types adjacent to bare sand land. Results indicated that (1) grass land and arable land contributed the most to landscape fragmentation processes in the regions adjacent to bare sand land during the period 1980 to 2010. Grass land occupied 54 % of the region adjacent to bare sand land in 1980. The Ratio of Patch Size of grass land decreased from 1980 to 2000 and increased after 2000. The Fractal Dimension Index of grass increased during the period 1980 to 1990 and decreased after 1990. Arable land expanded significantly during this period. The Ratio of Patch Size of arable land increased from 1980 to 1990 and decreased since 1990. The Fractal Dimension Index of arable land increased from 1990 to 2000 and decreased after 2000. (2) The Ratio of Patch Size and the Fractal Dimension Index were significantly related to the area of bare sand land. The role of landscape fragmentation was not linear to sandy desertification. There were both positive and negative effects of landscape fragmentation on sandy desertification. In 1980, the Ratio of Patch Size and the Fractal Dimension Index were negatively related to the area of bare sand land, showing that the landscape fragmentation and regularity of patches contributed to the expansion of sandy desertification. In 1990, 2000, and 2010, the Ratio of Patch Size and the Fractal Dimension Index were mostly positively related to the area of bare sand land, showing the landscape fragmentation and regularity of patches contributed to the reversion of sandy desertification in this phase. The absolute values of
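The two landscape metrics named above can be illustrated with a minimal calculation. The fractal dimension index below follows the standard FRAGSTATS-style patch formula; the "Ratio of Patch Size" is approximated here as mean patch area over landscape area, which is an assumption for illustration rather than the authors' exact definition, and all patch values are hypothetical.

```python
import math

def fractal_dimension_index(area_m2, perimeter_m):
    """FRAGSTATS-style FRAC = 2*ln(0.25*P)/ln(A); ~1 for simple shapes, toward 2 for convoluted ones."""
    return 2.0 * math.log(0.25 * perimeter_m) / math.log(area_m2)

def ratio_of_patch_size(class_patch_areas_m2, landscape_area_m2):
    """Assumed proxy: mean patch area of a land-use class divided by total landscape area."""
    return (sum(class_patch_areas_m2) / len(class_patch_areas_m2)) / landscape_area_m2

# Hypothetical grassland patches adjacent to bare sand land: (area m^2, perimeter m)
patches = [(2.5e5, 3200.0), (4.0e4, 1100.0), (9.0e5, 8000.0)]
frac_values = [fractal_dimension_index(a, p) for a, p in patches]
print("FRAC per patch:", [round(f, 3) for f in frac_values])
print("Ratio of Patch Size:", ratio_of_patch_size([a for a, _ in patches], 1.0e8))
```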
13. Seagrasses and sediment response to changing physical forcing in a coastal lagoon
Directory of Open Access Journals (Sweden)
J. Figueiredo da Silva
2004-01-01
Full Text Available The Ria de Aveiro is an estuary–coastal lagoon system connected to the Atlantic Ocean by a channel with a cross-sectional area that, for more than a century, has increased steadily, partly because of dredging over the last 50 years. Local ocean tides, with amplitudes of up to 3 m, are today transmitted to the lagoon by the single, engineered inlet channel and propagate to the end of the lagoon channels as a damped progressive wave. The increase in tidal amplitude with time has affected the lagoon ecosystem and the water has become more saline. Seagrass beds are important indicators of ecosystem change; until 1980, much of the lagoon bed was covered by seagrasses (Zostera, Ruppia, Potamogeton, which were collected in large quantities for use in agriculture. After 1960, the harvesting declined and the seagrass beds became covered in sediment, so that the area of seagrasses decreased substantially despite the decline in the quantity collected. The change in the pattern of seagrass populations can be related to changes in the physical forcing associated with increased tidal wave penetration. This has, in turn, induced transport and redistribution of coarser, sandy sediment and increased re-suspension and turbidity in the water column. However, the initiating cause for this ecosystem change was dredging, which, since the 1950s, has been used increasingly to widen and deepen the channels of the system.
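The statement that the ocean tide "propagates to the end of the lagoon channels as a damped progressive wave" can be illustrated with a textbook exponential-damping approximation. This is only a sketch with hypothetical damping coefficients and distances, not the hydrodynamics of the Ria de Aveiro itself; it simply shows how reduced damping (for example after a channel is deepened by dredging) raises the tidal amplitude reaching the lagoon head.

```python
import math

def tidal_amplitude(x_km, a0_m, damping_per_km):
    """Amplitude of a damped progressive tidal wave a distance x along a channel."""
    return a0_m * math.exp(-damping_per_km * x_km)

# Hypothetical numbers: 3 m tide at the inlet, 25 km to the head of a lagoon channel.
for mu in (0.08, 0.05, 0.03):  # decreasing damping coefficient
    print(f"mu = {mu:.2f} /km -> amplitude at 25 km: {tidal_amplitude(25.0, 3.0, mu):.2f} m")
```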
14. Event sedimentation in low-latitude deep-water carbonate basins, Anegada passage, northeast Caribbean
Science.gov (United States)
Chaytor, Jason D.; ten Brink, Uri S.
2015-01-01
The Virgin Islands and Whiting basins in the Northeast Caribbean are deep, structurally controlled depocentres partially bound by shallow-water carbonate platforms. Closed basins such as these are thought to document earthquake and hurricane events through the accumulation of event layers such as debris flow and turbidity current deposits and the internal deformation of deposited material. Event layers in the Virgin Islands and Whiting basins are predominantly thin and discontinuous, containing varying amounts of reef- and slope-derived material. Three turbidites/sandy intervals in the upper 2 m of sediment in the eastern Virgin Islands Basin were deposited between ca. 2000 and 13 600 years ago, but do not extend across the basin. In the central and western Virgin Islands Basin, a structureless clay-rich interval is interpreted to be a unifite. Within the Whiting Basin, several discontinuous turbidites and other sand-rich intervals are primarily deposited in base of slope fans. The youngest of these turbidites is ca. 2600 years old. Sediment accumulation in these basins is low (−1) for basin adjacent to carbonate platform, possibly due to limited sediment input during highstand sea-level conditions, sediment trapping and/or cohesive basin walls. We find no evidence of recent sediment transport (turbidites or debris flows) or sediment deformation that can be attributed to the ca. M7.2 1867 Virgin Islands earthquake whose epicentre was located on the north wall of the Virgin Islands Basin or to recent hurricanes that have impacted the region. The lack of significant appreciable pebble or greater size carbonate material in any of the available cores suggests that submarine landslide and basin-wide blocky debris flows have not been a significant mechanism of basin margin modification in the last several thousand years. Thus, basins such as those described here may be poor recorders of past natural hazards, but may provide a long-term record of past oceanographic

15. Nitrate reduction in an unconfined sandy aquifer
DEFF Research Database (Denmark)
Postma, Diederik Jan; Boesen, Carsten; Kristiansen, Henning
1991-01-01
of total dissolved ions in the NO3- free anoxic zone indicates the downward migration of contaminants and that active nitrate reduction is taking place. Nitrate is apparently reduced to N2 because both nitrite and ammonia are absent or found at very low concentrations. Possible electron donors......Nitrate distribution and reduction processes were investigated in an unconfined sandy aquifer of Quaternary age. Groundwater chemistry was studied in a series of eight multilevel samplers along a flow line, deriving water from both arable and forested land. Results show that plumes of nitrate...... processes of O2 and NO3- occur at rates that are fast compared to the rate of downward water transport. Nitrate-contaminated groundwater contains total contents of dissolved ions that are two to four times higher than in groundwater derived from the forested area. The persistence of the high content...
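The electron-donor question raised in the fragments above (what is being oxidised as nitrate disappears) is often framed as a simple electron balance. The sketch below uses generic denitrification stoichiometries for organic carbon and pyrite and a hypothetical nitrate concentration; it is illustrative bookkeeping, not the authors' calculation.

```python
# Generic electron balance for denitrification to N2 (illustrative only).
E_PER_NO3 = 5    # electrons accepted per NO3- reduced to N2
E_PER_CH2O = 4   # electrons donated per CH2O oxidised to CO2
E_PER_FES2 = 14  # electrons donated per FeS2 oxidised to SO4^2- and Fe^2+

def donor_required(no3_mmol_per_l, e_per_donor):
    """mmol/L of donor needed to reduce the given nitrate concentration to N2."""
    return no3_mmol_per_l * E_PER_NO3 / e_per_donor

no3 = 1.6  # mmol/L, hypothetical plume concentration (~100 mg NO3- per litre)
print(f"Organic carbon needed: {donor_required(no3, E_PER_CH2O):.2f} mmol CH2O/L")
print(f"Pyrite needed:         {donor_required(no3, E_PER_FES2):.2f} mmol FeS2/L")
```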
16. Contribution of phytoplankton and benthic microalgae to inner shelf sediments of the north-central Gulf of Mexico
Science.gov (United States)
Grippo, M. A.; Fleeger, J. W.; Rabalais, N. N.; Condrey, R.; Carman, K. R.
2010-03-01
Marine sediment may contain both settled phytoplankton and benthic microalgae (BMA). In river-dominated, shallow continental shelf systems, spatial, and temporal heterogeneity in sediment type and water-column characteristics (e.g., turbidity and primary productivity) may promote spatial variation in the relative contribution of these two sources to the sediment organic matter pool available to benthic consumers. Here we use photosynthetic pigment analysis and microscopic examination of sediment microalgae to investigate how the biomass, composition, and degradation state of sediment-associated microalgae vary along the Louisiana (USA) inner shelf, a region strongly influenced by the Mississippi River. Three sandy shoals and surrounding muddy sediments with depths ranging from 4 to 20 m were sampled in April, August, and October 2007. Pigment composition suggested that sediment microalgae were primarily diatoms at all locations. We found no significant differences in sediment chlorophyll a concentrations (8-77 mg m -2) at the shoal and off-shoal stations. Epipelic pennate diatoms (considered indicative of BMA) made up a significantly greater proportion of sediment diatoms at sandy (50-98%) compared to more silty off-shoal stations (16-56%). The percentage of centric diatoms (indicators of settled phytoplankton) in the sediment was highest in August. Sediment total pheopigment concentrations on sandy stations (40 mg m -2), suggesting differences in sediment microalgal degradation state. These observations suggest that BMA predominate in shallow sandy sediments and that phytodetritus predominates at muddy stations. Our results also suggest that the relative proportion of phytodetritus in the benthos was highest where phytoplankton biomass in the overlying water was greatest, independent of sediment type. The high biomass of BMA found on shoals suggests that benthic primary production on sandy sediments represents a potentially significant local source of sediment
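Two of the indicators discussed above, pigment degradation state and the pennate-to-centric diatom split, reduce to simple ratios. The sketch below computes those ratios for hypothetical stations; the values are not from the Louisiana shelf dataset.

```python
# Hypothetical station values, illustrating two sediment-microalgae indices.

def chl_fraction(chl_a, pheopigments):
    """Fraction of undegraded chlorophyll a: chl a / (chl a + pheopigments)."""
    return chl_a / (chl_a + pheopigments)

def benthic_fraction(pennate_count, centric_count):
    """Proportion of epipelic pennate diatoms (a proxy for benthic microalgae)."""
    return pennate_count / (pennate_count + centric_count)

# station: (chl a mg m^-2, pheopigments mg m^-2, pennate count, centric count)
stations = {"sandy shoal": (35.0, 25.0, 180, 20), "muddy off-shoal": (30.0, 90.0, 60, 110)}
for name, (chl, pheo, pennate, centric) in stations.items():
    print(f"{name}: chl fraction = {chl_fraction(chl, pheo):.2f}, "
          f"benthic diatom fraction = {benthic_fraction(pennate, centric):.2f}")
```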
17. Tracing time in the ocean: Unraveling depositional and preservational timescales using compound-specific radiocarbon analysis of biomarkers from marine sediments
OpenAIRE
Kusch, Stephanie
2010-01-01
Carbon cycle dynamics between the different inorganic and organic carbon pools play an important role in controlling the atmospheric chemical composition, thus, regulating the Earth’s climate. Atmospheric CO2 is fixed into biomass by photosynthesis of terrestrial and marine primary producers. Until final burial in marine sediments, the biologically fixed carbon that escapes remineralisation undergoes exchange between various active carbon reservoirs. Until now, the timescales o...
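Compound-specific radiocarbon results of the kind described here are commonly reported as fraction modern (Fm) or Δ14C and converted to a conventional age using the Libby mean life (8033 yr). The sketch below applies that standard conversion to a hypothetical biomarker value and ignores the small decay correction between measurement year and 1950.

```python
import math

def conventional_14c_age(fraction_modern):
    """Conventional radiocarbon age (yr BP): t = -8033 * ln(Fm)."""
    return -8033.0 * math.log(fraction_modern)

def fraction_modern_from_d14c(delta_14c_permil):
    """Approximate Fm from Delta14C in permil (measurement-year correction neglected)."""
    return 1.0 + delta_14c_permil / 1000.0

# Hypothetical biomarker measurement: Delta14C = -250 permil
fm = fraction_modern_from_d14c(-250.0)
print(f"Fm = {fm:.3f}, conventional age ~ {conventional_14c_age(fm):.0f} yr BP")
```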
18. Spatial distribution of organochlorine contaminants in soil, sediment, and fish in Bikini and Enewetak Atolls of the Marshall Islands, Pacific Ocean.
Science.gov (United States)
Wang, Jun; Caccamise, Sarah A L; Wu, Liejun; Woodward, Lee Ann; Li, Qing X
2011-08-01
Several nuclear tests were performed at Enewetak and Bikini Atolls in the Marshall Islands between 1946 and 1958. The events at Bikini Atoll involved several ships that were tested for durability during nuclear explosions, and 24 vessels now rest on the bottom of the Bikini lagoon. Nine soil samples were collected from different areas on the two islands of the atoll, and eighteen sediment, nine fish, and one lobster were collected in the vicinity of the sunken ships. Organochlorine pesticides (OCPs), polychlorinated biphenyls (PCBs), and polychlorinated terphenyls (PCTs) in these samples were analyzed using gas chromatography/ion trap mass spectrometry (GC/ITMS). The average recoveries ranged from 78% to 104% for the different PCB congeners. The limits of detection (LOD) for PCBs, PCTs, DDE, DDT, and dieldrin ranged 10-50 pg g(-1). Some fish from Enewetak contained PCBs at a concentration range of 37-137 ng g(-1), dry weight (dw), and most of the soils from Enewetak showed evidence of PCBs (22-392 ng g(-1)dw). Most of the Bikini lagoon sediment samples contained PCBs, and the highest was the one collected from around the Saratoga, an aircraft carrier (1555 ng g(-1)dw). Some of the fish samples, most of the soil samples, and only one of the sediment samples contained 2,2-bis(4-chlorophenyl)-1,1-dichloroethylene (DDE) and PCBs. In addition to PCBs, the soils from Enewetak Atoll contained PCTs. PCTs were not detected in the sediment samples from Bikini Atoll. The results suggest local pollution sources of PCBs, PCTs, and OCPs. Copyright © 2011. Published by Elsevier Ltd.
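The reported recoveries (78-104%) and detection limits (10-50 pg g-1) imply the usual post-processing steps of recovery correction and LOD screening. A minimal, hypothetical version of those steps is sketched below; congener names, recoveries and concentrations are placeholders, not results from the Marshall Islands samples.

```python
# Hypothetical post-processing of congener results (not the authors' exact workflow).

def recovery_corrected(measured_pg_g, recovery_fraction):
    """Correct a measured dry-weight concentration for analytical recovery."""
    return measured_pg_g / recovery_fraction

def screen(value_pg_g, lod_pg_g):
    """Report '<LOD' when a corrected result falls below the limit of detection."""
    return f"{value_pg_g:.0f} pg/g" if value_pg_g >= lod_pg_g else f"<{lod_pg_g:.0f} pg/g (LOD)"

# (congener, measured pg/g dw, recovery fraction, LOD pg/g)
results = [("PCB-153", 3200.0, 0.95, 20.0), ("PCB-77", 12.0, 0.83, 30.0)]
for name, measured, recovery, lod in results:
    print(f"{name}: {screen(recovery_corrected(measured, recovery), lod)}")
```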
19. Formation of Mg-aluminosilicates During Early Diagenesis of Carbonate Sediments in the Volcanic Crater Lake of Dziani Dzaha (Mayotte - Indian Ocean)
Science.gov (United States)
Milesi, V. P.; Jezequel, D.; Debure, M.; Marty, N.; Guyot, F. J.; Claret, F.; Virgone, A.; Gaucher, E.; Ader, M.
2017-12-01
Authigenic clays are increasingly reported in ancient carbonate rocks, but their origin remains poorly understood, strongly limiting paleoenvironmental interpretations. To tackle this issue, the carbonate sediments of the volcanic crater lake Dziani Dzaha are studied and reactive transport modeling is performed to assess the processes originating carbonate sediments associated with Mg-rich silicates during early diagenesis. The Dziani Dzaha is characterized by CO2-rich gases bubbling in three different locations, a high primary productivity leading to organic carbon contents of up to 30wt.% in the sediment, an alkalinity of 0.26 molal in the water column and pH values of 9 to 9.5. Characterization of bulk samples and clay fraction (fueled by inputs of CO2-rich volcanic gases, which generates high pH, promoting the formation of saponite, aragonite and hydromagnesite, which precipitates at first before being destabilized at depth due to organic matter mineralization. The observed carbon cycle, influenced by volcanic gases, may thus play a key role in the development of carbonate rocks associated with Mg-silicates.
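Whether phases such as aragonite or saponite precipitate in a reactive-transport simulation ultimately comes down to saturation-index checks. The sketch below shows the generic SI calculation for aragonite using illustrative ion activities and an approximate solubility product; it is not output from the Dziani Dzaha model.

```python
import math

LOG_KSP_ARAGONITE = -8.34  # approximate aragonite solubility product at 25 degC

def saturation_index(activity_ca, activity_co3, log_ksp):
    """SI = log10(IAP / Ksp); SI > 0 means supersaturated (precipitation favoured)."""
    return math.log10(activity_ca * activity_co3) - log_ksp

# High-pH, high-alkalinity lake water carries relatively more CO3^2- (illustrative activities)
print("SI(aragonite) =", round(saturation_index(1e-3, 1e-4, LOG_KSP_ARAGONITE), 2))
```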
20. Discharge controls on the sediment and dissolved nutrient transport flux of the lowermost Mississippi River: Implications for export to the ocean and for delta restoration
Science.gov (United States)
2017-12-01
Lagrangian longitudinal surveys and fixed station data are utilized from the lowermost Mississippi River reach in Louisiana at high and low discharge in 2012-2013 to examine the changing stream power, sediment transport capacity, and nitrate conveyance in this backwater reach of the river. Nitrate appears to remain conservative through the backwater reach at higher discharges (>15,000 m3/s), thus, nitrate levels supplied from the catchment are those exported to the Gulf of Mexico, fueling coastal hypoxia. At lower discharges, interaction with fine sediments and organic matter stored on the bed due to estuarine and tidal processes, likely elevates nitrate levels prior to entering the Gulf: a further 1-2 week long spike in nitrate concentrations is associated with the remobilization of this sediments during the rising discharge phase of the Mississippi. Backwater characteristics are clearly observed in the study reach starting at river kilometer 703 (Vicksburg) in both longitudinal study periods. Stream power at the lowermost station is only 16% of that at Vicksburg in the high discharge survey, and 0.6% at low flow. The high-to-low discharge study differential in unit stream power at a station increases between Vicksburg and the lowermost station from a factor of 3 to 47-50 times. At high discharge, ∼30% of this energy loss can be ascribed to the removal of water to the Atchafalaya at Old River Control. Suspended sediment flux decreases downstream in the studied reach in both studies: the lowermost station has 75% of the flux at Vicksburg in the high discharge study, and 0.9% in the low discharge study. The high discharge values, given that this study was conducted during the highest rising hydrograph of the water year, are augmented by sediment resuspended from the bed that was deposited in the previous low discharge phase. Examination of this first detailed field observation studies of the backwater phenomenon in a major river, shows that observed suspended
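The stream-power comparisons above follow from the standard definition of unit stream power, ω = ρgQS/w. The sketch below evaluates it for two stations using placeholder discharge, slope and width values; the contrast it produces is qualitative only, and the numbers are not the surveyed ones.

```python
# Unit stream power with placeholder hydraulic values (illustration only).
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def unit_stream_power(discharge_m3s, slope, width_m):
    """Stream power per unit bed width: omega = rho * g * Q * S / w, in W/m^2."""
    return RHO * G * discharge_m3s * slope / width_m

vicksburg = unit_stream_power(30000.0, 5e-5, 1000.0)
lowermost = unit_stream_power(21000.0, 5e-6, 800.0)  # ~30% of flow diverted, flatter backwater slope
print(f"Vicksburg: {vicksburg:.1f} W/m^2, lowermost: {lowermost:.1f} W/m^2, "
      f"ratio: {lowermost / vicksburg:.2f}")
```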
1. Geomorphic response of the Sandy River, Oregon, to removal of Marmot Dam
Science.gov (United States)
Major, Jon J.; O'Connor, Jim E.; Podolak, Charles J.; Keith, Mackenzie K.; Grant, Gordon E.; Spicer, Kurt R.; Pittman, Smokey; Bragg, Heather M.; Wallick, J. Rose; Tanner, Dwight Q.; Rhode, Abagail; Wilcock, Peter R.
2012-01-01
The October 2007 breaching of a temporary cofferdam constructed during removal of the 15-meter (m)-tall Marmot Dam on the Sandy River, Oregon, triggered a rapid sequence of fluvial responses as ~730,000 cubic meters (m3) of sand and gravel filling the former reservoir became available to a high-gradient river. Using direct measurements of sediment transport, photogrammetry, airborne light detection and ranging (lidar) surveys, and, between transport events, repeat ground surveys of the reservoir reach and channel downstream, we monitored the erosion, transport, and deposition of this sediment in the hours, days, and months following breaching of the cofferdam. Rapid erosion of reservoir sediment led to exceptional suspended-sediment and bedload-sediment transport rates near the dam site, as well as to elevated transport rates at downstream measurement sites in the weeks and months after breaching. Measurements of sediment transport 0.4 kilometers (km) downstream of the dam site during and following breaching show a spike in the transport of fine suspended sediment within minutes after breaching, followed by high rates of suspended-load and bedload transport of sand. Significant transport of gravel bedload past the measurement site did not begin until 18 to 20 hours after breaching. For at least 7 months after breaching, bedload transport rates just below the dam site during high flows remained as much as 10 times above rates measured upstream of the dam site and farther downstream. The elevated sediment load was derived from eroded reservoir sediment, which began eroding when a meters-tall knickpoint migrated about 200 m upstream in the first hour after breaching. Rapid knickpoint migration triggered vertical incision and bank collapse in unconsolidated sand and gravel, leading to rapid channel widening. Over the following days and months, the knickpoint migrated upstream more slowly, simultaneously decreasing in height and becoming less distinct. Within 7 months
2. Landscape Visual Quality and Meiofauna Biodiversity on Sandy Beaches
Science.gov (United States)
Felix, Gabriela; Marenzi, Rosemeri C.; Polette, Marcos; Netto, Sérgio A.
2016-10-01
Sandy beaches are central economic assets, attracting more recreational users than other coastal ecosystems. However, urbanization and landscape modification can compromise both the functional integrity and the attractiveness of beach ecosystems. Our study aimed at investigating the relationship between sandy beach artificialization and the landscape perception by the users, and between sandy beach visual attractiveness and biodiversity. We conducted visual and biodiversity assessments of urbanized and semiurbanized sandy beaches in Brazil and Uruguay. We specifically examined meiofauna as an indicator of biodiversity. We hypothesized that urbanization of sandy beaches results in a higher number of landscape detractors that negatively affect user evaluation, and that lower-rated beach units support lower levels of biodiversity. We found that urbanized beach units were rated lower than semiurbanized units, indicating that visual quality was sensitive to human interventions. Our expectations regarding the relationship between landscape perception and biodiversity were only partially met; only few structural and functional descriptors of meiofauna assemblages differed among classes of visual quality. However, lower-rated beach units exhibited signs of lower environmental quality, indicated by higher oligochaete densities and significant differences in meiofauna structure. We conclude that managing sandy beaches needs to advance beyond assessment of aesthetic parameters to also include the structure and function of beach ecosystems. Use of such supporting tools for managing sandy beaches is particularly important in view of sea level rise and increasing coastal development.
3. A feasibility study of the disposal of radioactive waste in deep ocean sediments by drilled emplacement: 1. A review of alternatives
International Nuclear Information System (INIS)
1983-01-01
This report describes the first stage of an engineering study of the disposal of high level radioactive waste in holes formed deep in the ocean floor. In this phase, the emphasis has been on establishing reference criteria, assessing the problems and evaluating potential solutions. The report concludes that there are no aspects that appear technically infeasible, but questions of safety and reliability of certain aspects require further investigation. (author)
4. Ejecta from Ocean Impacts
Science.gov (United States)
Kyte, Frank T.
2003-01-01
Numerical simulations of deep-ocean impact provide some limits on the size of a projectile that will not mix with the ocean floor during a deep-ocean impact. For a vertical impact at asteroidal velocities (approx. 20 km/s), mixing is only likely when the projectile diameter is greater than 1/2 of the water depth. For oblique impacts, even larger projectiles will not mix with ocean floor silicates. Given the typical water depths of 4 to 5 km in deep-ocean basins, asteroidal projectiles with diameters as large as 2 or 3 km may commonly produce silicate ejecta that is composed only of meteoritic materials and seawater salts. However, the compressed water column beneath the projectile can still disrupt and shock metamorphose the ocean floor. Therefore, production of a separate, terrestrial ejecta component is not ruled out in the most extreme case. With increasing projectile size (or energy) relative to water depths, there must be a gradation between oceanic impacts and more conventional continental impacts. Given that 60% of the Earth's surface is covered by oceanic lithosphere and 500 m projectiles impact the Earth on 10^5 y timescales, there must be hundreds of oceanic impact deposits in the sediment record awaiting discovery.
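The closing estimate of "hundreds" of oceanic impact deposits is simple rate arithmetic on the figures quoted above; the sketch below assumes a length of usable sediment record, which is not stated in the abstract.

    # Rough count of expected oceanic impacts from the quoted rates (illustrative).
    impact_interval_yr = 1e5      # one >=500 m projectile impact per ~10^5 years (from abstract)
    ocean_fraction = 0.60         # fraction of Earth's surface that is oceanic lithosphere
    record_length_yr = 1e8        # assumed length of usable marine sediment record (assumption)

    n_impacts = record_length_yr / impact_interval_yr   # total impacts over the record
    n_oceanic = n_impacts * ocean_fraction               # those landing in ocean basins
    print(f"expected oceanic impacts preserved: ~{n_oceanic:.0f}")   # ~600, i.e. "hundreds"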
5. Eighty years of food-web response to interannual variation in discharge recorded in river diatom frustules from an ocean sediment core.
Science.gov (United States)
Sculley, John B; Lowe, Rex L; Nittrouer, Charles A; Drexler, Tina M; Power, Mary E
2017-09-19
Little is known about the importance of food-web processes as controls of river primary production due to the paucity of both long-term studies and of depositional environments which would allow retrospective fossil analysis. To investigate how freshwater algal production in the Eel River, northern California, varied over eight decades, we quantified siliceous shells (frustules) of freshwater diatoms from a well-dated undisturbed sediment core in a nearshore marine environment. Abundances of freshwater diatom frustules exported to Eel Canyon sediment from 1988 to 2001 were positively correlated with annual biomass of Cladophora surveyed over these years in upper portions of the Eel basin. Over 28 years of contemporary field research, peak algal biomass was generally higher in summers following bankfull, bed-scouring winter floods. Field surveys and experiments suggested that bed-mobilizing floods scour away overwintering grazers, releasing algae from spring and early summer grazing. During wet years, growth conditions for algae could also be enhanced by increased nutrient loading from the watershed, or by sustained summer base flows. Total annual rainfall and frustule densities in laminae over a longer 83-year record were weakly and negatively correlated, however, suggesting that positive effects of floods on annual algal production were primarily mediated by "top-down" (consumer release) rather than "bottom-up" (growth promoting) controls.
6. In situ microscale variation in distribution and consumption of O2: A case study from a deep ocean margin sediment (Sagami Bay, Japan)
DEFF Research Database (Denmark)
Glud, Ronnie Nøhr; Stahl, Henrik; Berg, Peter
2009-01-01
A transecting microprofiler documented a pronounced small-scale variation in the benthic O2 concentration at 1450-m water depth (Sagami Bay, Japan). Data obtained during a single deployment revealed that within a sediment area of 190 cm2 the O2 penetration depth varied from 2.6 mm to 17.8 mm … increased the average diffusive O2 uptake by a factor of 1.26 ± 0.06. Detailed 2D calculations on the volume-specific O2 consumption exhibited high variability. The oxic zone was characterized by a mosaic of sediment parcels with markedly different activity levels. Millimeter- to centimeter-sized "hot spots" with O2 consumption rates up to 10 pmol cm−3 s−1 were separated by parcels of low or insignificant O2 consumption. The variation in aerobic activity must reflect an inhomogeneous distribution of electron donors and suggests that the turnover of material within the oxic zone to a large extent …
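Diffusive O2 uptake figures of this kind are conventionally derived from the measured microprofile via Fick's first law, J = -φ·Ds·dC/dz. Below is a minimal sketch of that calculation; the profile values, porosity, and diffusion coefficient are placeholders, not data from this deployment.

    import numpy as np

    # Minimal diffusive O2 uptake (DOU) estimate from a microprofile, J = -phi * Ds * dC/dz.
    depth_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # depth below interface, mm
    o2_umol_l = np.array([250.0, 210.0, 170.0, 130.0, 95.0])  # O2 concentration, umol/L (placeholder)

    phi = 0.8        # porosity (assumed)
    Ds = 1.2e-9      # sediment diffusion coefficient for O2, m^2/s (assumed)

    # Linear fit to the near-interface gradient; umol/L = mmol/m^3, so *1e-3 gives mol/m^3.
    dC_dz = np.polyfit(depth_mm * 1e-3, o2_umol_l * 1e-3, 1)[0]   # mol/m^3 per m
    flux = -phi * Ds * dC_dz      # mol m^-2 s^-1; negative gradient gives uptake into the sediment
    print(f"diffusive O2 uptake: {abs(flux) * 1e6 * 86400:.0f} umol m^-2 d^-1")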
8. Radiological assessment of the disposal of high level radioactive waste on or within the sediments of the deep ocean bed: v. 1
International Nuclear Information System (INIS)
Kane, P.
1987-11-01
The contract report comprises a main report accompanied by three volumes detailing the probabilistic risk assessments carried out for each proposed mode of HLW emplacement. Following a section describing the methodology employed, the models developed for and used in the assessment are described. Aspects of design, testing and calibration are covered. The data employed are described in relation to components of the disposal system, giving sources and reasons for the distribution used. Uncertainties in model predictions are examined in relation to their origin. Detailed results are presented which illustrate the transport behaviour of radionuclides in deep ocean environments. Conclusions are drawn and recommendations made for further research. (author)
9. Constraints in using Cerium-anomaly of bulk sediments as an indicator of paleo bottom water redox environment: A case study from the Central Indian Ocean Basin
Digital Repository Service at National Institute of Oceanography (India)
Pattan, J.N.; Pearce, N.J.G.; Mislankar, P.G.
10. Macrofauna and meiofauna of two sandy beaches at Mombasa, Kenya
Digital Repository Service at National Institute of Oceanography (India)
Ansari, Z.A.; Ingole, B.S.; Parulekar, A.H.
Macrofauna and meiofauna of 2 sandy beaches, having medium and fine sand particles respectively, were investigated quantitatively. Macrofauna density was highest around the high water mark and progressively decreased towards the low water mark. Meiofauna...
11. Studies on Thiobacilli spp. isolated from sandy beaches of Kerala
Digital Repository Service at National Institute of Oceanography (India)
Gore, P.S.; Raveendran, O.; Unnithan, R.V.
Occurrence, isolation and oxidative activity of Thiobacilli spp. from some sandy beaches of Kerala are reported. These organisms were encountered in polluted beaches and were dominant during monsoon in all the beaches...
12. Model projections of atmospheric steering of Sandy-like superstorms.
Science.gov (United States)
Barnes, Elizabeth A; Polvani, Lorenzo M; Sobel, Adam H
2013-09-17
Superstorm Sandy ravaged the eastern seaboard of the United States, costing a great number of lives and billions of dollars in damage. Whether events like Sandy will become more frequent as anthropogenic greenhouse gases continue to increase remains an open and complex question. Here we consider whether the persistent large-scale atmospheric patterns that steered Sandy onto the coast will become more frequent in the coming decades. Using the Coupled Model Intercomparison Project, phase 5 multimodel ensemble, we demonstrate that climate models consistently project a decrease in the frequency and persistence of the westward flow that led to Sandy's unprecedented track, implying that future atmospheric conditions are less likely than at present to propel storms westward into the coast.
13. On the Impact Angle of Hurricane Sandy's New Jersey Landfall
Science.gov (United States)
Hall, Timothy M.; Sobel, Adam H.
2013-01-01
Hurricane Sandy's track crossed the New Jersey coastline at an angle closer to perpendicular than any previous hurricane in the historic record, one of the factors contributing to record-setting peak water levels in parts of New Jersey and New York. To estimate the occurrence rate of Sandy-like tracks, we use a stochastic model built on historical hurricane data from the entire North Atlantic to generate a large sample of synthetic hurricanes. From this synthetic set we calculate that under long-term average climate conditions, a hurricane of Sandy's intensity or greater (category 1+) makes NJ landfall at an angle at least as close to perpendicular as Sandy's at an average annual rate of 0.0014 yr-1 (95% confidence range 0.0007 to 0.0023); i.e., a return period of 714 years (95% confidence range 435 to 1429).
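The return period quoted here is just the reciprocal of the annual rate; the short check below uses only the rates given in the abstract.

    # Return period = 1 / annual exceedance rate (rates taken from the abstract).
    for rate_per_yr in (0.0014, 0.0007, 0.0023):
        print(f"rate {rate_per_yr:.4f}/yr  ->  return period {1.0 / rate_per_yr:.0f} yr")
    # 0.0014/yr -> ~714 yr; the 95% range 0.0007-0.0023/yr maps to ~1429-435 yr.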
14. Reservoir architecture patterns of sandy gravel braided distributary channel
Directory of Open Access Journals (Sweden)
Senlin Yin
2016-06-01
The purpose of this study was to discuss the shape, scale and superimposed types of sandy gravel bodies in sandy-gravel braided distributary channels. Lithofacies analysis, hierarchy bounding surface analysis and a dense subsurface well pattern combined with outcrops were used to examine reservoir architecture patterns of sandy gravel braided distributary channels based on cores, well logging, and outcrop data, and the reservoir architecture patterns of sandy gravel braided distributary channels in different grades have been established. The study shows: (1) The main reservoir architecture elements for the sandy gravel braided channel delta are distributary channels and overbank sand, while the reservoir flow barrier elements are interchannel and lacustrine mudstone. (2) The compound sand bodies in the sandy gravel braided delta distributary channel take on three shapes: sheet-like distributary channel sand bodies, interwoven strip distributary channel sand bodies, and single strip distributary channel sand bodies. (3) Identification marks of a single distributary channel include: elevation of the sand body top, lateral overlaying, the "thick-thin-thick" feature of sand bodies, interchannel mudstone and overbank sand between distributary channels, and differences in the well log curve shape of sand bodies. (4) Nine lithofacies types were distinguished within distributary channel units; different channel units have different lithofacies association sequences.
15. Possible Late Pleistocene volcanic activity on Nightingale Island, South Atlantic Ocean, based on geoelectrical resistivity measurements, sediment corings and 14C dating
DEFF Research Database (Denmark)
Bjørk, Anders Anker; Björck, Svante; Cronholm, Anders
2011-01-01
Tristan da Cunha is a volcanic island group situated in the central South Atlantic. The oldest of these islands, Nightingale Island, has an age of about 18 Ma. In the interior of the island, there are several wetlands situated in topographic depressions. The ages of these basins have been unknown … The irregular shapes of the basins and the lack of clear erosional features indicate that they are not eruption craters and were not formed by erosion. Instead, we regard them as morphological depressions formed between ridges of trachytic lava flows and domes at a late stage of the formation of the volcanic edifice. The onset of sedimentation within these basins appears to have occurred between 24 and 37 ka, with the highest situated wetland yielding the highest ages. These ages are very young compared to the timing of the main phase of the formation of the island, implying volcanic activity on the island …
16. Human threats to sandy beaches: A meta-analysis of ghost crabs illustrates global anthropogenic impacts.
Science.gov (United States)
Schlacher, Thomas A.; Lucrezi, Serena; Connolly, Rod M.; Peterson, Charles H.; Gilby, Ben L.; Maslo, Brooke; Olds, Andrew D.; Walker, Simon J.; Leon, Javier X.; Huijbers, Chantal M.; Weston, Michael A.; Turra, Alexander; Hyndes, Glenn A.; Holt, Rebecca A.; Schoeman, David S.
2016-02-01
Beach and coastal dune systems are increasingly subjected to a broad range of anthropogenic pressures that on many shorelines require significant conservation and mitigation interventions. But these interventions require reliable data on the severity and frequency of adverse ecological impacts. Such evidence is often obtained by measuring the response of 'indicator species'. Ghost crabs are the largest invertebrates inhabiting tropical and subtropical sandy shores and are frequently used to assess human impacts on ocean beaches. Here we present the first global meta-analysis of these impacts, and analyse the design properties and metrics of studies using ghost-crabs in their assessment. This was complemented by a gap analysis to identify thematic areas of anthropogenic pressures on sandy beach ecosystems that are under-represented in the published literature. Our meta-analysis demonstrates a broad geographic reach, encompassing studies on shores of the Pacific, Indian, and Atlantic Oceans, as well as the South China Sea. It also reveals what are, arguably, two major limitations: i) the near-universal use of proxies (i.e. burrow counts to estimate abundance) at the cost of directly measuring biological traits and bio-markers in the organism itself; and ii) descriptive or correlative study designs that rarely extend beyond a simple 'compare and contrast approach', and hence fail to identify the mechanistic cause(s) of observed contrasts. Evidence for a historically narrow range of assessed pressures (i.e., chiefly urbanisation, vehicles, beach nourishment, and recreation) is juxtaposed with rich opportunities for the broader integration of ghost crabs as a model taxon in studies of disturbance and impact assessments on ocean beaches. Tangible advances will most likely occur where ghost crabs provide foci for experiments that test specific hypotheses associated with effects of chemical, light and acoustic pollution, as well as the consequences of climate change (e
17. Estimation of sediment properties during benthic impact experiments
Digital Repository Service at National Institute of Oceanography (India)
Yamazaki, T.; Sharma, R
Sediment properties, such as water content and density, have been used to estimate the dry and wet weights, as well as the volume of sediment recovered and discharged, during benthic impact experiments conducted in the Pacific and Indian Oceans...
18. Studies on the shelf sediments off the Madras coast
Digital Repository Service at National Institute of Oceanography (India)
Rao, Ch.M.; Murty, P.S.N.
… Grain size study has shown that the sediments off Madras are mainly sandy in nature and vary from fine to very fine sands in the nearshore and outer shelf regions to medium to coarse sands in the midshelf region. Off Karaikal they vary from coarse...
19. Respirable dust and quartz exposure from three South African farms with sandy, sandy loam, and clay soils.
Science.gov (United States)
Swanepoel, Andrew J; Kromhout, Hans; Jinnah, Zubair A; Portengen, Lützen; Renton, Kevin; Gardiner, Kerry; Rees, David
2011-07-01
To quantify personal time-weighted average respirable dust and quartz exposure on a sandy, a sandy loam, and a clay soil farm in the Free State and North West provinces of South Africa and to ascertain whether soil type is a determinant of exposure to respirable quartz. Three farms, located in the Free State and North West provinces of South Africa, had their soil type confirmed as sandy, sandy loam, and clay; and, from these, a total of 298 respirable dust and respirable quartz measurements were collected between July 2006 and November 2009 during periods of major farming operations. Values below the limit of detection (LOD) (22 μg·m−3) were estimated using multiple imputation. Non-parametric tests were used to compare quartz exposure from the three different soil types. Exposure to respirable quartz occurred on all three farms, with the highest individual concentration measured on the sandy soil farm (626 μg·m−3). Fifty-seven, 59, and 81% of the measurements on the sandy soil, sandy loam soil, and clay soil farm, respectively, exceeded the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) of 25 μg·m−3. Twelve and 13% of respirable quartz concentrations exceeded 100 μg·m−3 on the sandy soil and sandy loam soil farms, respectively, but none exceeded this level on the clay soil farm. The proportions of measurements >100 μg·m−3 were not significantly different between the sandy and sandy loam soil farms ('prop.test'; P = 0.65), but both were significantly larger than for the clay soil farm ('prop.test'; P = 0.0001). The percentage of quartz in respirable dust was determined for all three farms using measurements above the limit of detection. Percentages ranged from 0.5 to 94.4% with no significant difference in the median quartz percentages across the three farms (Kruskal-Wallis test; P = 0.91). This study demonstrates that there is significant potential for over-exposure to respirable quartz in
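A hedged sketch of the two-sample proportion comparison reported above (R's prop.test); the per-farm sample sizes below are invented for illustration, since the abstract gives only percentages and the pooled total of 298 measurements.

    from scipy.stats import chi2_contingency

    # Compare the share of measurements above 100 ug/m^3 between two farms.
    # Counts are hypothetical; the abstract reports only percentages (12% vs 13%).
    sandy_above, sandy_total = 12, 100
    loam_above, loam_total = 13, 100

    table = [
        [sandy_above, sandy_total - sandy_above],
        [loam_above, loam_total - loam_above],
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square p-value: {p:.2f}")   # large p => proportions not significantly different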
20. Extension of 239+240Pu sediment geochronology to coarse-grained marine sediments
Science.gov (United States)
Kuehl, Steven A.; Ketterer, Michael E.; Miselis, Jennifer L.
2012-01-01
Sediment geochronology of coastal sedimentary environments dominated by sand has been extremely limited because concentrations of natural and bomb-fallout radionuclides are often below the limit of measurement using standard techniques. ICP-MS analyses of 239+240Pu from two sites representative of traditionally challenging (i.e., low concentration) environments provide a "proof of concept" and demonstrate a new application for bomb-fallout radiotracers in the study of sandy shelf-seabed dynamics. A kasten core from the New Zealand shelf in the Southern Hemisphere (low fallout), and a vibracore from the sandy nearshore of North Carolina (low particle surface area) both reveal measurable 239+240Pu activities at depth. In the case of the New Zealand site, independently verified steady-state sedimentation results in a 239+240Pu profile that mimics the expected atmospheric fallout. The depth profile of 239+240Pu in the North Carolina core is more uniform, indicating significant sediment resuspension, which would be expected in this energetic nearshore environment. This study, for the first time, demonstrates the utility of 239+240Pu in the study of sandy environments, significantly extending the application of bomb-fallout isotopes to coarse-grained sediments, which compose the majority of nearshore regions.
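One common way such fallout profiles are converted into accumulation rates is to locate the depth of the 1963 bomb-fallout maximum and divide by the elapsed time; the sketch below uses an invented profile and an assumed core-collection year, purely to illustrate the arithmetic.

    import numpy as np

    # Estimate a steady-state sediment accumulation rate from a 239+240Pu depth profile.
    # Depths and activities below are invented for illustration.
    depth_cm = np.array([2, 6, 10, 14, 18, 22, 26])
    pu_activity = np.array([0.4, 0.7, 1.1, 1.8, 1.0, 0.3, 0.0])   # arbitrary units

    core_year = 2010      # year the core was collected (assumption)
    peak_year = 1963      # global fallout maximum

    peak_depth = depth_cm[np.argmax(pu_activity)]
    rate_cm_per_yr = peak_depth / (core_year - peak_year)
    print(f"approximate accumulation rate: {rate_cm_per_yr:.2f} cm/yr")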
1. Brazilian sandy beach macrofauna production: a review
Directory of Open Access Journals (Sweden)
Marcelo Petracco
2012-12-01
The state of the art of the studies on the production of Brazilian sandy beach macrofauna was analyzed on the basis of the data available in the literature. For this purpose, the representativeness of the production dataset was examined by latitudinal distribution, degree of exposure and morphodynamic state of beaches, taxonomic groups, and methods employed. A descriptive analysis was further made to investigate the trends in production of the more representative taxonomic groups and species of sandy beach macrofauna. A total of 69 macrofauna annual production estimates were obtained for 38 populations from 25 studies carried out between 22º56'S and 32º20'S. Production estimates were restricted to populations on beaches located on the southern and southeastern Brazilian coast. Most of the populations in the dataset inhabit exposed dissipative sandy beaches and are mainly represented by mollusks and crustaceans, with a smaller number of polychaetes. The trends in production among taxonomic groups follow a similar pattern to that observed on beaches throughout the world, with high values for bivalves and decapods. The high turnover rate (P/B ratio) of the latter was due to the presence in the dataset of several populations of the mole crab Emerita brasiliensis, which can attain high values of productivity. Most of the studies focus on the comparison of production and, especially, of P/B ratio according to life history traits in populations of the same species/taxonomic group. Despite the importance of life history-production studies, other approaches, such as the effect of man-induced disturbances on the macrofauna, should be undertaken in these threatened environments.
2. Sediment Transport and Slope Stability of Ship Shoal Borrow Areas for Coastal Restoration of Louisiana
Science.gov (United States)
Liu, H.; Xu, K.; Bentley, S. J.; Li, C.; Miner, M. D.; Wilson, C.; Xue, Z.
2017-12-01
Sandy barrier islands along Louisiana coast are degrading rapidly due to both natural and anthropogenic factors. Ship Shoal is one of the largest offshore sand resources, and has been used as a borrow area for Caminada Headland Restoration Project. Our knowledge of sediment transport and infilling processes in this new sandy and dynamic borrow area is rather limited. High resolution sub-bottom seismic data, side scan sonar images, multi-beam bathymetry and laser sediment grain size data were used to study seafloor morphological evolution and pit wall stability in response to both physical and geological processes. The multi-beam bathymetry and seismic profiling inside the pit showed that disequilibrium conditions led to rapid infilling in the pits at the beginning, but this process slowed down after the pit slope became stable and topography became smooth. We hypothesize that the erosion of the adjacent seabed sediment by energetic waves and longshore currents, the supply of suspended sediment from the rivers, and the erodible materials produced by local mass wasting on pit walls are three main types of infilling sediments. Compared with mud-capped dredge pits, this sandy dredge pit seems to have more gentle slopes on pit walls, which might be controlled by the angle of repose. Infilling sediment seems to be dominantly sandy, with some mud patches on bathymetric depressions. This study helps us better understand the impacts of mining sediment for coastal restoration and improves sand resource management efforts.
3. Determination of volatile, toxic hydrogen phosphides in the sediments of the Elbe river, the Elbe estuaries and the Heligoland Bay
International Nuclear Information System (INIS)
Gassmann, G.
1992-01-01
The distribution and concentration of phosphines in the sediments of the Elbe river were determined by selective preparation and analysis. The concentration of phosphines in one kilogram of wet sediment was in the range of 0.1 to 57 ng, with the bulking, anaerobic mud from harbors having the highest and the sandy, aerobic sediments having the lowest concentrations. Phosphines in fluvial sediments were detected successfully for the first time applying the method described. (orig.) [de]
4. A Quality Control study of the distribution of NOAA MIRS Cloudy retrievals during Hurricane Sandy
Science.gov (United States)
Fletcher, S. J.
2013-12-01
Cloudy radiances present a difficult challenge to data assimilation (DA) systems, through both the radiative transfer system and the hydrometeors required to resolve cloud and precipitation. In most DA systems the hydrometeors are not control variables due to many limitations. The National Oceanic and Atmospheric Administration's (NOAA) Microwave Integrated Retrieval System (MIRS) is producing products from the NPP-ATMS satellite where the scene is cloud and precipitation affected. The test case that we present here is the lifetime of Hurricane and then Superstorm Sandy in October 2012. As a quality control study we shall compare the retrieved water vapor content during the lifetime of Sandy with the first guess and the analysis from the NOAA Gridpoint Statistical Interpolation (GSI) system. The assessment involves the gross error check system against the first guess with different values for the observational error variance, to see if the difference is within three standard deviations. We shall also compare against the final analysis at the relevant cycles to see if the products which have been retrieved through a cloudy radiance are similar, given that the DA system does not yet assimilate cloudy radiances.
5. Spatial variability of hydraulic conductivity of an unconfined sandy aquifer determined by a mini slug test
DEFF Research Database (Denmark)
Bjerg, Poul Løgstrup; Hinsby, Klaus; Christensen, Thomas Højlund
1992-01-01
The spatial variability of the hydraulic conductivity in a sandy aquifer has been determined by a mini slug test method. The hydraulic conductivity (K) of the aquifer has a geometric mean of 5.05 × 10−4 m s−1, and an overall variance of ln K equal to 0.37, which corresponds quite well to the results obtained by two large scale tracer experiments performed in the aquifer. A geological model of the aquifer based on 31 sediment cores proposed three hydrogeological layers in the aquifer, concurrent with the vertical variations observed with respect to hydraulic conductivity. The horizontal correlation … to be in the range of 0.3–0.5 m, compared with a value of 0.42 m obtained in one of the tracer tests performed …
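For reference, the two summary statistics quoted above (the geometric mean of K and the variance of ln K) would be computed from slug-test results roughly as in the sketch below; the K values are placeholders, not measurements from this aquifer.

    import numpy as np

    # Summary statistics for hydraulic conductivity from slug tests (placeholder values).
    K = np.array([3.2e-4, 5.9e-4, 4.1e-4, 7.5e-4, 2.8e-4, 6.3e-4])   # m/s

    lnK = np.log(K)
    geometric_mean = np.exp(lnK.mean())   # geometric mean of K
    var_lnK = lnK.var(ddof=1)             # sample variance of ln K

    print(f"geometric mean K: {geometric_mean:.2e} m/s")
    print(f"variance of ln K: {var_lnK:.2f}")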
6. Quantifying tidally driven benthic oxygen exchange across permeable sediments
DEFF Research Database (Denmark)
McGinnis, Daniel F.; Sommer, Stefan; Lorke, Andreas
2014-01-01
Continental shelves are predominately (approximately 70%) covered with permeable, sandy sediments. While identified as critical sites for intense oxygen, carbon, and nutrient turnover, constituent exchange across permeable sediments remains poorly quantified. The central North Sea largely consists of permeable sediments and has been identified as increasingly at risk for developing hypoxia. Therefore, we investigate the benthic O2 exchange across the permeable North Sea sediments using a combination of in situ microprofiles, a benthic chamber, and aquatic eddy correlation. Tidal bottom currents drive the variable sediment O2 penetration depth (from approximately 3 to 8 mm) and the concurrent turbulence-driven 25-fold variation in the benthic sediment O2 uptake. The O2 flux and variability were reproduced using a simple 1-D model linking the benthic turbulence to the sediment pore water exchange …
7. Benthic faunal sampling adjacent to the Sand Island ocean outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900088)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna in the vicinity of the Sand Island ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment grain size and sediment...
8. Optical dating of young tidal sediments in the Danish Wadden Sea
DEFF Research Database (Denmark)
Madsen, Anni Tindahl; Murray, A. S.; Andersen, Thorbjørn Joest
2007-01-01
… reliable and reproducible results in cores from sub-, inter- and supra-tidal sediments, ranging from only a few years up to ~1000 years old, confirming its value in the estimation of estuarine accretion rates. With OSL it is, for the first time, possible to date sediment cores from silty and sandy tidal flats, providing a new approach to the problem of evaluation of stability and calculation of sediment budgets for estuaries and coastal lagoons.
9. Environmental consequences of the flooding of the Bay Park Sewage Treatment Plant during Superstorm Sandy.
Science.gov (United States)
Swanson, R Lawrence; Wilson, Robert; Brownawell, Bruce; Willig, Kaitlin
2017-08-15
Failure of the Bay Park Sewage Treatment Plant (STP) during Superstorm Sandy led to adverse effects in the waters of Hempstead Bay, Long Island, NY. These appear to be related to large discharges of partially treated sewage through its primary and auxiliary outfalls. Modeled dilution discharges indicate that sewage infiltrated the bay, remaining up to 10 days. Water column impacts included salinity and dissolved oxygen declines, and biological oxygen demand and nitrogen concentration increases. While the STP does not appear to have released fecal coliform, there were elevated levels of enterococci within the bay for a considerable period following the storm, probably from multiple sources. The STP's reduced functioning and associated environmental impacts, even with resilience upgrades, are not conducive to removing the bay from the list of Impaired Water Bodies. The results reinforce the need to transfer the discharge from the existing outfall to the ocean. Copyright © 2017 Elsevier Ltd. All rights reserved.
10. Turbulence and sediment transport over sand dunes and ripples
Science.gov (United States)
Bennis, A.; Le Bot, S.; lafite, R.; Bonneton, P.; Ardhuin, F.
2013-12-01
Several bedforms are present near the surf zone of natural beaches; dunes and ripples are frequently observed. Understanding the turbulence over these forms is essential for sediment transport, since the turbulent flow and the suspended sand particles interact with each other. At the moment, the modelling strategy for turbulence is still a challenge. Depending on the spatial scales, different methods are employed to model the turbulence, in particular RANS (Reynolds Averaged Navier-Stokes) and LES (Large Eddy Simulation). A hybrid method combining both RANS and LES is set up here. We have adapted this method, initially developed for atmospheric flow, to oceanic flow. This new method is implemented inside the 3D hydrodynamic model MARS 3D, which is forced by waves. LES is currently the best way to simulate turbulent flow, but its higher cost prevents it from being used for large scale applications. So, here we use RANS near the bottom while LES is used elsewhere. This allows us to minimize the computational cost and to ensure better accuracy of the results than with a fully RANS model. In the case of megaripples, the validation step was performed with two sets of field data (Sandy Duck'97 and Forsoms'13) but also with the data from the Dune2D model, which uses only RANS for turbulence. The main findings are: a) the vertical profiles of the velocity are similar throughout the data; b) the turbulent kinetic energy, which was underestimated by Dune2D, is in line with the observations; c) the concentration of suspended sediment is simulated with better accuracy than with Dune2D but remains lower than the observations.
11. Legal Considerations for Health Care Practitioners After Superstorm Sandy.
Science.gov (United States)
Hershey, Tina Batra; Van Nostrand, Elizabeth; Sood, Rishi K; Potter, Margaret
2016-06-01
During disaster response and recovery, legal issues often arise related to the provision of health care services to affected residents. Superstorm Sandy led to the evacuation of many hospitals and other health care facilities and compromised the ability of health care practitioners to provide necessary primary care. This article highlights the challenges and legal concerns faced by health care practitioners in the aftermath of Sandy, which included limitations in scope of practice, difficulties with credentialing, lack of portability of practitioner licenses, and concerns regarding volunteer immunity and liability. Governmental and nongovernmental entities employed various strategies to address these concerns; however, legal barriers remained that posed challenges throughout the Superstorm Sandy response and recovery period. We suggest future approaches to address these legal considerations, including policies and legislation, additional waivers of law, and planning and coordination among multiple levels of governmental and nongovernmental organizations. (Disaster Med Public Health Preparedness. 2016;10:518-524).
12. Acidification of sandy grasslands - consequences for plant diversity
DEFF Research Database (Denmark)
Olsson, Pål Axel; Mårtensson, Linda-Maria; Bruun, Hans Henrik
2009-01-01
Questions: (1) Does soil acidification in calcareous sandy grasslands lead to loss of plant diversity? (2) What is the relationship between the soil content of lime and the plant availability of mineral nitrogen (N) and phosphorus (P) in sandy grasslands? Location: Sandy glaciofluvial deposits … Environmental variables were recorded at each plot, and soil samples were analysed for exchangeable P and N, as well as limestone content and pH. Data were analysed with regression analysis and canonical correspondence analysis. Results: Plant species richness was highest on weakly acid to slightly alkaline soil; a number of nationally red-listed species showed a similar pattern. Plant species diversity and number of red-listed species increased with slope. Where the topsoil had been acidified, limestone was rarely present above a depth of 30 cm. The presence of limestone restricts the availability …
13. Longitudinal Impact of Hurricane Sandy Exposure on Mental Health Symptoms.
Science.gov (United States)
Schwartz, Rebecca M; Gillezeau, Christina N; Liu, Bian; Lieberman-Cribbin, Wil; Taioli, Emanuela
2017-08-24
Hurricane Sandy hit the eastern coast of the United States in October 2012, causing billions of dollars in damage and acute physical and mental health problems. The long-term mental health consequences of the storm and their predictors have not been studied. New York City and Long Island residents completed questionnaires regarding their initial Hurricane Sandy exposure and mental health symptoms at baseline and 1 year later (N = 130). There were statistically significant decreases in anxiety scores (mean difference = -0.33, p < …) … Hurricane Sandy has an impact on PTSD symptoms that persists over time. Given the likelihood of more frequent and intense hurricanes due to climate change, future hurricane recovery efforts must consider the long-term effects of hurricane exposure on mental health, especially on PTSD, when providing appropriate assistance and treatment.
14. Hurricane Sandy, Disaster Preparedness, and the Recovery Model.
Science.gov (United States)
Pizzi, Michael A
2015-01-01
Hurricane Sandy was the second largest and costliest hurricane in U.S. history to affect multiple states and communities. This article describes the lived experiences of 24 occupational therapy students who lived through Hurricane Sandy using the Recovery Model to frame the research. Occupational therapy student narratives were collected and analyzed using qualitative methods and framed by the Recovery Model. Directed content and thematic analysis was performed using the 10 components of the Recovery Model. The 10 components of the Recovery Model were experienced by or had an impact on the occupational therapy students as they coped and recovered in the aftermath of the natural disaster. This study provides insight into the lived experiences and recovery perspectives of occupational therapy students who experienced Hurricane Sandy. Further research is indicated in applying the Recovery Model to people who survive disasters. Copyright © 2015 by the American Occupational Therapy Association, Inc.
15. Beryllium-10 in continental sediments
International Nuclear Information System (INIS)
Brown, L.; Sacks, I.S.; Tera, F.; Klein, J.; Middleton, R.
1981-01-01
The concentration of 10Be has been measured in 10 samples taken from a transect of surface sediments beginning in the Atchafalaya River and extending across the Bay 136 km into the Gulf of Mexico. If corrected for a lower retentivity of sand for Be, they have a concentration that is constant within 13%. This concentration is about an order of magnitude smaller than that of deep ocean sediments. For comparison, measurements of 10Be in rainwater, in a sample of soil and in a deep ocean core were made. (orig.)
16. Oceanic ferromanganese deposits: Future resources and past-ocean recorders
Digital Repository Service at National Institute of Oceanography (India)
Banakar, V.K.; Nair, R.R.; Parthiban, G.; Pattan, J.N.
… decades following Mero's publication witnessed a global "Nodule Rush". The technological leaders of those years, such as the US, Germany, Japan, France, New Zealand, and the USSR, conducted major scientific expeditions to the Central Pacific to map … -Mn-(Cu+Ni+Co) in ferromanganese deposits from the Central Indian Ocean (Source: Jauhari, 1987). The nodules occur invariably in almost all the deep-sea basins witnessing low sedimentation rates, but abundant ore-grade deposits …
17. Family Structures, Relationships, and Housing Recovery Decisions after Hurricane Sandy
Directory of Open Access Journals (Sweden)
Ali Nejat
2016-04-01
Understanding of the recovery phase of a disaster cycle is still in its infancy. Recent major disasters such as Hurricane Sandy have revealed the inability of existing policies and planning to promptly restore infrastructure, residential properties, and commercial activities in affected communities. In this setting, a thorough grasp of housing recovery decisions can lead to effective post-disaster planning by policyholders and public officials. The objective of this research is to integrate vignette and survey design to study how family bonds affected rebuilding/relocating decisions after Hurricane Sandy. Multinomial logistic regression was used to investigate respondents’ family structures before Sandy and explore whether their relationships with family members changed after Sandy. The study also explores the effect of the aforementioned relationship and its changes on households’ plans to either rebuild/repair their homes or relocate. These results were compared to another multinomial logistic regression which was applied to examine the impact of familial bonds on respondents’ suggestions to a vignette family concerning rebuilding and relocating after a hurricane similar to Sandy. Results indicate that respondents who lived with family members before Sandy were less likely to plan for relocating than those who lived alone. A more detailed examination shows that this effect was driven by those who improved their relationships with family members; those who did not improve their family relationships were not significantly different from those who lived alone, when it came to rebuilding/relocation planning. Those who improved their relationships with family members were also less likely to suggest that the vignette family relocate. This study supports the general hypothesis that family bonds reduce the desire to relocate, and provides empirical evidence that family mechanisms are important for the rebuilding/relocating decision
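As a rough illustration of the multinomial logistic regression setup described above, the sketch below uses statsmodels on a tiny synthetic dataset; the variable names and data are mine, not the study's.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic example: outcome 0 = rebuild/repair, 1 = relocate, 2 = undecided.
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "lived_with_family": rng.integers(0, 2, n),       # 1 = lived with family before the storm
        "relationship_improved": rng.integers(0, 2, n),   # 1 = family relationship improved afterwards
    })
    outcome = rng.integers(0, 3, n)

    X = sm.add_constant(df)
    model = sm.MNLogit(outcome, X).fit(disp=False)
    print(model.summary())   # coefficients are log-odds relative to the base outcome (rebuild/repair)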
18. Nearshore sediment thickness, Fire Island, New York
Science.gov (United States)
Locker, Stanley D.; Miselis, Jennifer L.; Buster, Noreen A.; Hapke, Cheryl J.; Wadman, Heidi M.; McNinch, Jesse E.; Forde, Arnell S.; Stalk, Chelsea A.
2017-04-03
Investigations of coastal change at Fire Island, New York (N.Y.), sought to characterize sediment budgets and determine geologic framework controls on coastal processes. Nearshore sediment thickness is critical for assessing coastal system sediment availability, but it is largely unquantified due to the difficulty of conducting geological or geophysical surveys across the nearshore. This study used an amphibious vessel to acquire chirp subbottom profiles. These profiles were used to characterize nearshore geology and provide an assessment of nearshore sediment volume. Two resulting sediment-thickness maps are provided: total Holocene sediment thickness and the thickness of the active shoreface. The Holocene sediment section represents deposition above the maximum flooding surface that is related to the most recent marine transgression. The active shoreface section is the uppermost Holocene sediment, which is interpreted to represent the portion of the shoreface thought to contribute to present and future coastal behavior. The sediment distribution patterns correspond to previously defined zones of erosion, accretion, and stability along the island, demonstrating the importance of sediment availability in the coastal response to storms and seasonal variability. The eastern zone has a thin nearshore sediment thickness, except for an ebb-tidal deposit at the wilderness breach caused by Hurricane Sandy. Thicker sediment is found along a central zone that includes shoreface-attached sand ridges, which is consistent with a stable or accretional coastline in this area. The thickest overall Holocene section is found in the western zone of the study, where a thicker lower section of Holocene sediment appears related to the westward migration of Fire Island Inlet over several hundred years.
19. Scripps Sediment Description File- OCSEAP Portion
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The Scripps Institution of Oceanography compiled descriptions of sediment samples in the Alaskan Outer Continental Shelf area, funded through the NOAA Outer...
20. Delaware River and Upper Bay Sediment Data
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The area of coverage consists of 192 square miles of benthic habitat mapped from 2005 to 2007 in the Delaware River and Upper Delaware Bay. The bottom sediment map...
1. The NGDC Seafloor Sediment Grain Size Database
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The NGDC (now NCEI) Seafloor Sediment Grain Size Database contains particle size data for over 17,000 seafloor samples worldwide. The file was begun by NGDC in 1976...
2. Deck41 Surficial Seafloor Sediment Description Database
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Deck41 is a digital summary of surficial sediment composition for 36,401 seafloor samples worldwide. Data include collecting source, ship, cruise, sample id,...
3. The NGDC Seafloor Sediment Geotechnical Database
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The NGDC Seafloor Sediment Geotechnical Properties Database contains test engineering properties data coded by students at NGDC from primarily U.S. Naval...
4. A numerical model investigation of the impacts of Hurricane Sandy on water level variability in Great South Bay, New York
Science.gov (United States)
Bennett, Vanessa C. C.; Mulligan, Ryan P.; Hapke, Cheryl J.
2018-01-01
Hurricane Sandy was a large and intense storm with high winds that caused total water levels from combined tides and storm surge to reach 4.0 m in the Atlantic Ocean and 2.5 m in Great South Bay (GSB), a back-barrier bay between Fire Island and Long Island, New York. In this study the impact of the hurricane winds and waves are examined in order to understand the flow of ocean water into the back-barrier bay and water level variations within the bay. To accomplish this goal, a high resolution hurricane wind field is used to drive the coupled Delft3D-SWAN hydrodynamic and wave models over a series of grids with the finest resolution in GSB. The processes that control water levels in the back-barrier bay are investigated by comparing the results of four cases that include: (i) tides only; (ii) tides, winds and waves with no overwash over Fire Island allowed; (iii) tides, winds, waves and limited overwash at the east end of the island; (iv) tides, winds, waves and extensive overwash along the island. The results indicate that strong local wind-driven storm surge along the bay axis had the largest influence on the total water level fluctuations during the hurricane. However, the simulations allowing for overwash have higher correlation with water level observations in GSB and suggest that island overwash provided a significant contribution of ocean water to eastern GSB during the storm. The computations indicate that overwash of 7500–10,000 m3s−1 was approximately the same as the inflow from the ocean through the major existing inlet. Overall, the model results indicate the complex variability in total water levels driven by tides, ocean storm surge, surge from local winds, and overwash that had a significant impact on the circulation in Great South Bay during Hurricane Sandy.
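The case comparison described above amounts to asking which simulated water-level series correlates best with the gauge observations; a minimal sketch of that comparison follows, with synthetic series standing in for the model output and the observed record.

    import numpy as np

    # Correlate modelled and observed water levels for each overwash scenario.
    # All series below are synthetic stand-ins for model output and a gauge record.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 48.0, 200)                     # hours
    tide = 1.2 * np.sin(2 * np.pi * t / 12.42)          # semidiurnal tide
    surge = 1.5 * np.exp(-((t - 24.0) / 6.0) ** 2)      # storm-surge pulse
    observed = tide + surge + rng.normal(0.0, 0.05, t.size)

    cases = {
        "tides only": tide,
        "limited overwash": tide + 0.5 * surge,
        "extensive overwash": tide + surge,
    }
    for name, modelled in cases.items():
        r = np.corrcoef(observed, modelled)[0, 1]
        print(f"{name:>20s}: r = {r:.3f}")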
6. Ocean Acidification | Smithsonian Ocean Portal
Science.gov (United States)
… ocean is affected. Such a relatively quick change in ocean chemistry doesn't give marine life, which …
7. Remote Sensing of Ocean Color
Science.gov (United States)
The oceans cover over 70% of the earth's surface and the life inhabiting the oceans play an important role in shaping the earth's climate. Phytoplankton, the microscopic organisms in the surface ocean, are responsible for half of the photosynthesis on the planet. These organisms at the base of the food web take up light and carbon dioxide and fix carbon into biological structures releasing oxygen. Estimating the amount of microscopic phytoplankton and their associated primary productivity over the vast expanses of the ocean is extremely challenging from ships. However, as phytoplankton take up light for photosynthesis, they change the color of the surface ocean from blue to green. Such shifts in ocean color can be measured from sensors placed high above the sea on satellites or aircraft and is called "ocean color remote sensing." In open ocean waters, the ocean color is predominantly driven by the phytoplankton concentration and ocean color remote sensing has been used to estimate the amount of chlorophyll a, the primary light-absorbing pigment in all phytoplankton. For the last few decades, satellite data have been used to estimate large-scale patterns of chlorophyll and to model primary productivity across the global ocean from daily to interannual timescales. Such global estimates of chlorophyll and primary productivity have been integrated into climate models and illustrate the important feedbacks between ocean life and global climate processes. In coastal and estuarine systems, ocean color is significantly influenced by other light-absorbing and light-scattering components besides phytoplankton. New approaches have been developed to evaluate the ocean color in relationship to colored dissolved organic matter, suspended sediments, and even to characterize the bathymetry and composition of the seafloor in optically shallow waters. Ocean color measurements are increasingly being used for environmental monitoring of harmful algal blooms, critical coastal habitats
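The chlorophyll retrieval described above is typically a band-ratio calculation on blue and green remote-sensing reflectances; the sketch below shows the general polynomial form of such algorithms, with placeholder coefficients rather than any official sensor fit.

    import numpy as np

    # Band-ratio chlorophyll estimate of the general "OCx" form used in ocean color work.
    # The polynomial coefficients below are illustrative placeholders, not an official sensor fit.
    A = [0.3, -2.8, 1.5, -0.5, -0.3]

    def chlorophyll_mg_m3(rrs_blue, rrs_green):
        """Estimate chlorophyll-a from a blue/green remote-sensing reflectance ratio."""
        x = np.log10(rrs_blue / rrs_green)
        log10_chl = sum(a * x**i for i, a in enumerate(A))
        return 10.0 ** log10_chl

    print(f"{chlorophyll_mg_m3(rrs_blue=0.008, rrs_green=0.004):.2f} mg/m^3")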
8. Patterns of species richness in sandy beaches of South America ...
African Journals Online (AJOL)
The middle shore is primarily occupied by cirolanids and bivalves, and hippid crabs, bivalves and amphipods dominate the lower beach. Generally, species richness increases from upper to lower beach levels. Studies carried out on exposed sandy beaches of south-central Chile (ca. 40°S) show that different beach states ...
9. Hurricane Sandy: An Educational Bibliography of Key Research Studies
Science.gov (United States)
Piotrowski, Chris
2013-01-01
There, undoubtedly, will be a flurry of research activity in the "Superstorm" Sandy impact area on a myriad of disaster-related topics, across academic disciplines. The purpose of this study was to review the disaster research related specifically to hurricanes in the educational and social sciences that would best serve as a compendium…
10. Structural stability and hydraulic conductivity of Nkpologu sandy ...
African Journals Online (AJOL)
mean weight diameter (MWD), water dispersible silt (WDSi), aggregate size distributions (> 2 mm, 1-0.5 mm and < 0.25 ... above sea level. ... and red to brownish red and derived from sandy ... where Q = steady state volume of outflow from the.
11. Aggregations of the sandy-beach isopod, Tylos granulatus ...
African Journals Online (AJOL)
... lives as a scavenger in the intertidal zone of sandy beaches on the west coast of South Africa. Individuals emerge with the receding tide leaving exit holes, then forage for about two hours before returning to the vicinity of the high-water mark where they aggregate to bury themselves, leaving behind cone-shaped mounds.
12. effect of tractor forward speed on sandy loam soil physical ...
African Journals Online (AJOL)
Ilorin on a sandy loam soil to evaluate the effect of the imposition of different … of the blade is 10.5 cm … arranged in an inverted cone shape with … replicates were taken for each speed run. The …
13. Structural Stability and Hydraulic Conductivity Of Nkpologu Sandy ...
African Journals Online (AJOL)
Studies were conducted in the runoff plots at the University of Nigeria Nsukka Teaching and Research Farm in 2010 and 2011 to monitor the changes in structural stability and saturated hydraulic conductivity (Ksat) of Nkpologu sandy loam soil under different cover management practices. The management practices were ...
14. Rapid Assessment of Anthropogenic Impacts of Exposed Sandy ...
African Journals Online (AJOL)
We applied a rapid assessment methodology to estimate the degree of human impact of exposed sandy beaches in Ghana using ghost crabs as ecological indicators. The use of size ranges of ghost crab burrows and their population density as ecological indicators to assess extent of anthropogenic impacts on beaches ...
15. Organisms associated with the sandy-beach bivalve Donax serra ...
African Journals Online (AJOL)
16. Deaths associated with Hurricane Sandy - October-November 2012.
Science.gov (United States)
2013-05-24
On October 29, 2012, Hurricane Sandy hit the northeastern U.S. coastline. Sandy's tropical storm winds stretched over 900 miles (1,440 km), causing storm surges and destruction over a larger area than that affected by hurricanes with more intensity but narrower paths. Based on storm surge predictions, mandatory evacuations were ordered on October 28, including for New York City's Evacuation Zone A, the coastal zone at risk for flooding from any hurricane. By October 31, the region had 6-12 inches (15-30 cm) of precipitation, 7-8 million customers without power, approximately 20,000 persons in shelters, and news reports of numerous fatalities (Robert Neurath, CDC, personal communication, 2013). To characterize deaths related to Sandy, CDC analyzed data on 117 hurricane-related deaths captured by American Red Cross (Red Cross) mortality tracking during October 28-November 30, 2012. This report describes the results of that analysis, which found drowning was the most common cause of death related to Sandy, and 45% of drowning deaths occurred in flooded homes in Evacuation Zone A. Drowning is a leading cause of hurricane death but is preventable with advance warning systems and evacuation plans. Emergency plans should ensure that persons receive and comprehend evacuation messages and have the necessary resources to comply with them.
17. The Sandy Hook Elementary School shooting as tipping point
Science.gov (United States)
Shultz, James M; Muschert, Glenn W; Dingwall, Alison; Cohen, Alyssa M
2013-01-01
Among rampage shooting massacres, the Sandy Hook Elementary School shooting on December 14, 2012 galvanized public attention. In this Commentary we examine the features of this episode of gun violence that has sparked strong reactions and energized discourse that may ultimately lead toward constructive solutions to diminish high rates of firearm deaths and injuries in the United States. PMID:28228989
18. Copper and zinc distribution coefficients for sandy aquifer materials
DEFF Research Database (Denmark)
Christensen, Thomas Højlund; Astrup, Thomas; Boddum, J. K.
2000-01-01
Distribution coefficients (Kd) were measured for copper (Cu) and zinc (Zn) in laboratory batch experiments for 17 sandy aquifer materials at environmentally relevant solute concentrations (Cu: 5–300 mg/l, Zn: 20–3100 mg/l). The Kd values ranged two to three orders of magnitude (Cu: 70–10,800 l/kg...
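Batch Kd values like these are conventionally derived from a mass balance on the solution: whatever is lost from the aqueous phase at equilibrium is assumed to be sorbed to the solid. A minimal Python sketch of that arithmetic is given below, with invented inputs rather than values from the study.

def batch_kd(c_initial, c_equilibrium, volume_l, mass_kg):
    """Return Kd (l/kg) from one batch sorption experiment."""
    sorbed_mass = (c_initial - c_equilibrium) * volume_l   # mass removed from solution
    cs = sorbed_mass / mass_kg                             # sorbed concentration per kg of solid
    return cs / c_equilibrium                              # l/kg; concentration units cancel

# Example: 0.1 l of solution at concentration 200 shaken with 10 g of aquifer sand,
# leaving 50 in solution at equilibrium -> Kd = 30 l/kg. All numbers are invented.
print(batch_kd(c_initial=200.0, c_equilibrium=50.0, volume_l=0.1, mass_kg=0.01))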
19. The sandy beach meiofauna and free-living nematodes from De Panne (Belgium)
OpenAIRE
Gheskiere, T.; Hoste, E.; Kotwicki, L.; Degraer, S.; Vanaverbeke, J.; Vincx, M.
2002-01-01
Despite their rather barren and arid appearance, European sandy beaches harbour a highly diverse fauna and flora, and some of them are even highly productive. In contrast to tropical sandy beaches, little is known about the structural and functional diversity of the different benthic components. This study aims to investigate the structural diversity of the meiobenthos, with emphasis on free-living marine nematodes, on a Belgian sandy beach. The samples were collected on the sandy beach of De Panne...
20. Effects of soil amendment on soil characteristics and maize yield in Horqin Sandy Land
Science.gov (United States)
Zhou, L.; Liu, J. H.; Zhao, B. P.; Xue, A.; Hao, G. C.
2016-08-01
A 4-year experiment was conducted to investigate the inter-annual effects of sandy soil amendment on maize yield, soil water storage and soil enzymatic activities in sandy soil in Northeast China from 2010 to 2014. The sandy soil amendment was applied in different years, and the differing effects were investigated in 2014. There were six treatments: (1) no sandy soil amendment (CK); (2) one year after applying sandy soil amendment (T1); (3) two years after applying sandy soil amendment (T2); (4) three years after applying sandy soil amendment (T3); (5) four years after applying sandy soil amendment (T4); (6) five years after applying sandy soil amendment (T5). T refers to treatment, and the number refers to the years since application of the sandy soil amendment. Compared with CK, the sandy soil amendment improved soil water storage and soil urease, invertase and catalase activity across growth stages and soil layers. Soil water storage in the treatments roughly followed the order T3 > T5 > T4 > T2 > T1 > CK, while soil urease, invertase and catalase activity roughly followed the order T5 > T3 > T4 > T2 > T1 > CK. Application of the sandy soil amendment significantly (p ≤ 0.05) increased grain yield and biomass yield by 22.75%-41.42% and 29.92%-45.45%, respectively, and maize yield gradually increased over the following five years. Sandy soil amendment applied to poor sandy soil had a positive effect on soil water storage, soil enzymatic activities and maize yield; the five-year treatment (T5) showed the best effects among all treatments and deserves further research.
1. Nodules of the Central Indian Ocean Basin
Digital Repository Service at National Institute of Oceanography (India)
Banakar, V.K.; Kodagali, V.N.
of calcareous sediments within, and pelagic sediments south of, 15 degrees S latitude. Prior to the launching of the project, very little data was available on the Indian Ocean nodules compared to those of the Pacific. This chapter summarizes the findings of the project...
2. Ocean tides
Science.gov (United States)
Hendershott, M. C.
1975-01-01
A review of recent developments in the study of ocean tides and related phenomena is presented. Topics briefly discussed include: the mechanism by which tidal dissipation occurs; continental shelf, marginal sea, and baroclinic tides; estimation of the amount of energy stored in the tide; the distribution of energy over the ocean; the resonant frequencies and Q factors of oceanic normal modes; the relationship of earth tides and ocean tides; and numerical global tidal models.
3. Closure Time of the Junggar-Balkhash Ocean: Constraints From Late Paleozoic Volcano-Sedimentary Sequences in the Barleik Mountains, West Junggar, NW China
Science.gov (United States)
Liu, Bo; Han, Bao-Fu; Chen, Jia-Fu; Ren, Rong; Zheng, Bo; Wang, Zeng-Zhen; Feng, Li-Xia
2017-12-01
The Junggar-Balkhash Ocean was a major branch of the southern Paleo-Asian Ocean. The timing of its closure is important for understanding the history of the Central Asian Orogenic Belt. New sedimentological and geochronological data from the Late Paleozoic volcano-sedimentary sequences in the Barleik Mountains of West Junggar, NW China, help to constrain the closure time of the Junggar-Balkhash Ocean. The Tielieketi Formation (Fm) is dominated by littoral sediments, but its upper glauconite-bearing sandstone is interpreted to have been deposited rapidly in a shallow-water shelf setting. By contrast, the Heishantou Fm consists chiefly of volcanic rocks, conformably overlying or in fault contact with the Tielieketi Fm. The Molaoba Fm is composed of parallel-stratified fine sandstone and sandy conglomerate with graded bedding, typical of nonmarine, fluvial deposition. This formation unconformably overlies the Tielieketi and Heishantou formations and is conformably covered by the Kalagang Fm, which is characterized by a continental bimodal volcanic association. The youngest U-Pb ages of detrital zircons from sandstones and zircon U-Pb ages from volcanic rocks suggest that the Tielieketi, Heishantou, Molaoba, and Kalagang formations were deposited during the Famennian-Tournaisian, Tournaisian-early Bashkirian, Gzhelian, and Asselian-Sakmarian, respectively. The absence of upper Bashkirian to Kasimovian strata was likely caused by tectonic uplift of the West Junggar terrane. This is compatible with the occurrence of coeval stitching plutons in the West Junggar and adjacent areas. The Junggar-Balkhash Ocean must therefore have finally closed before the Gzhelian, slightly later than or concurrent with the closure of other ocean domains of the southern Paleo-Asian Ocean.
4. Barrier-island and estuarine-wetland physical-change assessment after Hurricane Sandy
Science.gov (United States)
Plant, Nathaniel G.; Smith, Kathryn E.L.; Passeri, Davina L.; Smith, Christopher G.; Bernier, Julie C.
2018-04-03
Introduction: The Nation's eastern coast is fringed by beaches, dunes, barrier islands, wetlands, and bluffs. These natural coastal barriers provide critical benefits and services, and can mitigate the impact of storms, erosion, and sea-level rise on our coastal communities. Waves and storm surge resulting from Hurricane Sandy, which made landfall along the New Jersey coast on October 29, 2012, impacted the U.S. coastline from North Carolina to Massachusetts, including Assateague Island, Maryland and Virginia, and the Delmarva coastal system. The storm impacts included changes in topography, coastal morphology, geology, hydrology, environmental quality, and ecosystems. In the immediate aftermath of the storm, light detection and ranging (lidar) surveys from North Carolina to New York documented storm impacts to coastal barriers, providing a baseline to assess vulnerability of the reconfigured coast. The focus of much of the existing coastal change assessment is along the ocean-facing coastline; however, much of the coastline affected by Hurricane Sandy includes the estuarine-facing coastlines of barrier-island systems. Specifically, the wetland and back-barrier shorelines experienced substantial change as a result of wave action and storm surge that occurred during Hurricane Sandy (see also USGS photograph, http://coastal.er.usgs.gov/hurricanes/sandy/photo-comparisons/virginia.php). Assessing physical shoreline and wetland change (land loss as well as land gains) can help to determine the resiliency of wetland systems that protect adjacent habitat, shorelines, and communities. To address storm impacts to wetlands, a vulnerability assessment should describe both long-term (for example, several decades) and short-term (for example, Sandy's landfall) extent and character of the interior wetlands and the back-barrier-shoreline changes. The objective of this report is to describe several new wetland vulnerability assessments based on the detailed physical changes
5. Documentation and hydrologic analysis of Hurricane Sandy in New Jersey, October 29–30, 2012
Science.gov (United States)
Suro, Thomas P.; Deetz, Anna; Hearn, Paul
2016-11-17
In 2012, a late season tropical depression developed into a tropical storm and later a hurricane. The hurricane, named “Hurricane Sandy,” gained strength to a Category 3 storm on October 25, 2012, and underwent several transitions on its approach to the mid-Atlantic region of the eastern coast of the United States. By October 28, 2012, Hurricane Sandy had strengthened into the largest hurricane ever recorded in the North Atlantic and was tracking parallel to the east coast of United States, heading toward New Jersey. On October 29, 2012, the storm turned west-northwest and made landfall near Atlantic City, N.J. The high winds and wind-driven storm surge caused massive damage along the entire coastline of New Jersey. Millions of people were left without power or communication networks. Many homes were completely destroyed. Sand dunes were eroded, and the barrier island at Mantoloking was breached, connecting the ocean with Barnegat Bay.Several days before the storm made landfall in New Jersey, the U.S. Geological Survey (USGS) made a decision to deploy a temporary network of storm-tide sensors and barometric pressure sensors from Virginia to Maine to supplement the existing USGS and National Oceanic and Atmospheric Administration (NOAA) networks of permanent tide monitoring stations. After the storm made landfall, the USGS conducted a sensor data recovery and high-water-mark collection campaign in cooperation with the Federal Emergency Management Agency (FEMA).Peak storm-tide elevations documented at USGS tide gages, tidal crest-stage gages, temporary storm sensor locations, and high-water-mark sites indicate the area from southern Monmouth County, N.J., north through Raritan Bay, N.J., had the highest peak storm-tide elevations during this storm. The USGS tide gages at Raritan River at South Amboy and Raritan Bay at Keansburg, part of the New Jersey Tide Telemetry System, each recorded peak storm-tide elevations of greater than 13 feet (ft)—more than 5 ft
6. Let's Bet on Sediments! Hudson Canyon Cruise--Grades 9-12. Focus: Sediments of Hudson Canyon.
Science.gov (United States)
National Oceanic and Atmospheric Administration (DOC), Rockville, MD.
These activities are designed to teach about the sediments of Hudson Canyon. Students investigate and analyze the patterns of sedimentation in the Hudson Canyon, observe how heavier particles sink faster than finer particles, and learn that submarine landslides are avalanches of sediment in deep ocean canyons. The activity provides learning…
7. The effect of particle size on sorption of estrogens, androgens and progestagens in aquatic sediment
International Nuclear Information System (INIS)
Sangster, Jodi L.; Oke, Hugues; Zhang, Yun; Bartelt-Hunt, Shannon L.
2015-01-01
Highlights: • Two sediments were used to evaluate the effects of particle size on steroid sorption. • Sorption capacity did not increase with decreasing particle size for all steroids. • Particle interactions affect the distribution of steroids within the whole sediments. • Preferential sorption to fine particles was observed. - Abstract: There is growing concern about the biologic effects of steroid hormones in impacted waterways. There is increasing evidence of enhanced transport and biological effects stemming from steroid hormones associated with soils or sediments; however, there are limited studies evaluating how steroid hormone distribution between various particle sizes within whole sediments affects steroid fate. In this study, sorption of 17β-estradiol, estrone, progesterone, and testosterone was evaluated to different size fractions of two natural sediments, a silty loam and a sandy sediment, to determine the steroid sorption capacity to each fraction and distribution within the whole sediment. Sorption isotherms for all steroid hormones fit linear sorption models. Sorption capacity was influenced more by organic carbon content than particle size. Interactions between size fractions were found to affect the distribution of steroids within the whole sediments. All four steroids preferentially sorbed to the clay and colloids in the silty loam sediment at the lowest aqueous concentration (1 ng/L) and as aqueous concentration increased, the distribution of sorbed steroid was similar to the distribution by weight of each size fraction within the whole sediment. In the sandy sediment, preferential sorption to fine particles was observed.
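A linear sorption isotherm of the kind reported here is fully described by its slope, i.e. the distribution coefficient Kd, which can be estimated from paired aqueous and sorbed concentrations with a zero-intercept least-squares fit. The sketch below illustrates this on invented data points, not the study's measurements.

import numpy as np

# Hypothetical isotherm data (not from the study): aqueous concentration Cw (ng/L)
# and sorbed concentration Cs (ng/kg) for one steroid on one sediment size fraction.
cw = np.array([1.0, 10.0, 50.0, 100.0, 500.0])
cs = np.array([42.0, 410.0, 2080.0, 4150.0, 20600.0])

# Zero-intercept linear isotherm Cs = Kd * Cw, fitted by least squares.
kd = float(np.linalg.lstsq(cw.reshape(-1, 1), cs, rcond=None)[0][0])
print(f"Kd ~ {kd:.1f} L/kg (ng/kg per ng/L)")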
8. The effect of particle size on sorption of estrogens, androgens and progestagens in aquatic sediment
Energy Technology Data Exchange (ETDEWEB)
Sangster, Jodi L.; Oke, Hugues; Zhang, Yun; Bartelt-Hunt, Shannon L., E-mail: [email protected]
2015-12-15
9. Cross-shore profile and coastline changes of a sandy beach in Pieria, Greece, based on measurements and numerical simulation
Directory of Open Access Journals (Sweden)
A.M. PROSPATHOPOULOS
2004-06-01
Full Text Available In the present work, the changes of cross-shore profile and the coastline of a sandy beach in Pieria, Greece, are studied by using topographic profiles, sediment analysis and a numerical simulation model. The work is motivated by the considerable erosion problems caused to an extended portion of the coast north of the studied area due to the construction of a craft shelter, and its scope is two-fold: to help in understanding the dynamics of the beach based on results of the field work and to proceed a step further, studying the responses of this beach by numerical simulation, utilizing the topographic and sediment field data and measured wave data. The study of the cross-shore profiles, as well as the sediment analysis of the samples obtained along the profiles, revealed the morphological features of the coast under study and provided information concerning the dynamic zones in each profile. The sediment grain size reduces from south to north, following the direction of the longshore currents generated in the area. The results of the numerical simulation concerning the coastline evolution are found to be in agreement with the qualitative estimations and visual observations of existing coastal changes to the broader area.
10. Sorption and Migration Mechanisms of 237 Np through Sandy Soil
International Nuclear Information System (INIS)
2003-06-01
In order to evaluate the migration behavior of radioactive nuclides in the disposal of low-level radioactive waste in a shallow land burial, the sorption characteristics and migration behavior of 237Np through sandy soil were studied. Two experimental methods were used: batch and column systems. The distribution coefficients (Kd) obtained from the adsorption and desorption processes are rather small, about 16 and 21 cm3/g, respectively. The size distribution of 237Np species in the influent solution was measured by an ultra-filtration technique. The migration mechanism of 237Np was studied by column experiments in which the volume of eluting solution was varied: 100, 300, 500, 1000 and 2000 ml, respectively. The results from the five column experiments confirm that the sorption characteristics of 237Np are mainly controlled by a reversible ion-exchange reaction and that the migration of 237Np in the sandy soil can be estimated by using the Kd concept
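Estimating migration "by using the Kd concept", as stated above, typically means converting the distribution coefficient into a retardation factor that scales the pore-water velocity. The sketch below shows that standard relation; the bulk density, porosity and pore-water velocity are assumed illustrative values, not data from this study.

def retardation(kd_cm3_per_g, bulk_density_g_cm3, porosity):
    """Standard retardation factor R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_cm3_per_g

kd = 16.0                                                   # cm3/g, adsorption value reported above
r = retardation(kd, bulk_density_g_cm3=1.6, porosity=0.4)   # assumed soil properties
pore_water_velocity = 10.0                                  # cm/day, assumed
print(f"R = {r:.0f}, estimated nuclide velocity ~ {pore_water_velocity / r:.2f} cm/day")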
11. Superstorm Sandy and the Verdant Power RITE Project
Science.gov (United States)
Corren, D.; Colby, J.; Adonizio, M.
2013-12-01
12. Quantifying the Digital Traces of Hurricane Sandy on Flickr
Science.gov (United States)
Preis, Tobias; Moat, Helen Susannah; Bishop, Steven R.; Treleaven, Philip; Stanley, H. Eugene
2013-11-01
Society's increasing interactions with technology are creating extensive "digital traces" of our collective human behavior. These new data sources are fuelling the rapid development of the new field of computational social science. To investigate user attention to the Hurricane Sandy disaster in 2012, we analyze data from Flickr, a popular website for sharing personal photographs. In this case study, we find that the number of photos taken and subsequently uploaded to Flickr with titles, descriptions or tags related to Hurricane Sandy bears a striking correlation to the atmospheric pressure in the US state of New Jersey during this period. Appropriate leverage of such information could be useful to policy makers and others charged with emergency crisis management.
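The correlation described here, between a daily count of Sandy-tagged photos and atmospheric pressure, can be summarized with a simple Pearson coefficient. The sketch below uses invented daily series, not the study's data; the abstract only reports a "striking correlation", so the sign in the example is an assumption.

import numpy as np

# Illustrative daily series (not the study's data): counts of Sandy-related Flickr
# photos and mean atmospheric pressure (hPa) in New Jersey over the same days.
photos = np.array([120, 310, 980, 4200, 9100, 5300, 2100, 800])
pressure = np.array([1012, 1008, 998, 962, 945, 970, 990, 1005])

# Pearson correlation coefficient between the two series; these made-up numbers give
# a strongly negative value, i.e. more photos as pressure fell around landfall.
r = np.corrcoef(photos, pressure)[0, 1]
print(f"Pearson r = {r:.2f}")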
13. Seafloor Mapping and Benthic Habitats off Assateague Island National Seashore: can we Resolve any Effects of Superstorm Sandy?
Science.gov (United States)
Miller, D.; Trembanis, A. C.; Kennedy, E.; Rusch, H.; Rothermel, E.
2016-02-01
The National Park Service has partnered with faculty and students at the University of Delaware to map the length of Assateague Island and sample benthic communities there for two purposes: (1) to provide a complete inventory of benthic habitats and their biota, and (2) to determine if any changes from a pre-storm survey can be ascribed to Superstorm Sandy in 2012. During the 2014 and 2015 field seasons over 75 km2 of high-resolution ( 50 cm/pixel) side-scan sonar and collocated bathymetry were collected with a surface vessel mounted bathy side-scan sonar (EdgeTech 6205), spanning the shore from depths of less than 2 m out to a distance of approximately 1 nautical mile and depths of 10-12 m. Furthermore, we have resampled using standard methodology (modified Young grab and 0.5-mm sieve) a subset of the previously sampled benthic stations that represent all sediment classes identified in prior studies. Additionally, we have obtained novel data with our ROV and AUV assets, including finer scale bottom video and multibeam bathymetry, at specifically chosen locations in order to enhance understanding of the benthic habitat and bottom type changes. In addition to providing a habitat and faunal inventory for resource management purposes, we will compare our side scan and benthic survey data to the pre-storm 2011 data products with comparable coverage. To date we have found that ArcGIS and ENVI sediment classifications agree well with those from the 2011 study, but spatially we note more areas of finer sediments and less of gravel. As was expected, 2014 benthic assemblages differ significantly among sediment classes (PRIMER ANOSIM), and sediment class is the best predictor of the benthic community (PERMANOVA+ distance-based RDA). Our goal here is to use consistent analytical approaches to characterize changes that occur over season and inter-annual time scales. This is a critical step toward attributing sediment, habitat and biological changes to Superstorm Sandy.
14. Patterns of species richness in sandy beaches of South America
African Journals Online (AJOL)
beaches with reflective and dissipative characteristics (sensu ... beach intertidal communities was reviewed, (b) location of ten sandy beaches studied in south-central Chile, and (c) location of two sandy beaches studied on the ...
15. Brazilian sandy beaches: characteristics, ecosystem services, impacts, knowledge and priorities
Directory of Open Access Journals (Sweden)
Antonia Cecília Zacagnini Amaral
Full Text Available ABSTRACT Sandy beaches constitute a key ecosystem and provide socioeconomic goods and services, thereby playing an important role in the maintenance of human populations and in biodiversity conservation. Despite the ecological and social importance of these ecosystems, Brazilian sandy beaches are significantly impacted by human interference, chemical and organic pollution and tourism, as well as global climate change. These factors drive the need to better understand the environmental change and its consequences for biota. To promote the implementation of integrated studies to detect the effects of regional and global environmental change on beaches and on other benthic habitats of the Brazilian coast, Brazilian marine researchers have established The Coastal Benthic Habitats Monitoring Network (ReBentos). In order to provide input for sample planning by ReBentos, we have conducted an intensive review of the studies conducted on Brazilian beaches and summarized the current knowledge about this environment. In this paper, we present the results of this review and describe the physical, biological and socioeconomic features of Brazilian beaches. We have used these results, our personal experience and worldwide literature to identify research projects that should be prioritized in the assessment of regional and global change on Brazilian sandy beaches. We trust that this paper will provide insights for future studies and represent a significant step towards the conservation of Brazilian beaches and their biodiversity.
16. Sediment Transport
DEFF Research Database (Denmark)
Liu, Zhou
Flow and sediment transport are important in relation to several engineering topics, e.g. erosion around structures, backfilling of dredged channels and nearshore morphological change. The purpose of the present book is to describe both the basic hydrodynamics and the basic sediment transport mechanics. Chapter 1 deals with fundamentals in fluid mechanics with emphasis on bed shear stress by currents, while chapter 3 discusses wave boundary layer theory. They are both written with a view to sediment transport. Sediment transport in rivers, cross-shore and longshore are dealt with in chapters 2, 4 and 5, respectively. It is not the intention of the book to give a broad review of the literature on this very wide topic. The book tries to pick up information which is of engineering importance. An obstacle to the study of sedimentation is the scale effect in model tests. Whenever small...
17. Oceanic archipelagos
DEFF Research Database (Denmark)
Triantis, Kostas A.; Whittaker, Robert James; Fernández-Palacios, José María
2016-01-01
Since the contributions of Charles Darwin and Alfred Russel Wallace, oceanic archipelagos have played a central role in the development of biogeography. However, despite the critical influence of oceanic islands on ecological and evolutionary theory, our focus has remained limited to either the i...... of the archipelagic geological dynamics that can affect diversity at both the island and the archipelagic level. We also reaffirm that oceanic archipelagos are appropriate spatiotemporal units to frame analyses in order to understand large scale patterns of biodiversity....
18. Ocean transportation
National Research Council Canada - National Science Library
Frankel, Ernst G; Marcus, Henry S
1973-01-01
.... This analysis starts with a review of ocean transportation demand and supply including projections of ship capacity demand and world shipbuilding capacity under various economic and political assumptions...
19. NOAA ESRI Grid - sediment size predictions model in New York offshore planning area from Biogeography Branch
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents sediment size predictions from a sediment spatial model developed for the New York offshore spatial planning area. The model also includes...
20. NOAA ESRI Shapefile - sediment composition class predictions in New York offshore planning area from Biogeography Branch
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents sediment composition class predictions from a sediment spatial model developed for the New York offshore spatial planning area. The...
1. Sediment - size distribution of innershelf off Gopalpur, Orissa coast using EOF analysis
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Rao, K.M.; Rao, M.M.M.; Lakshminarayana, S.; Murthy, K.S.R.
-energy conditions such as wave breaking, wind action and oscillating motion of the grains in the surfzone and thus formed a coarse sandy deposit. Transgressive sediments also might have played some role in producing bimodal nature. This bimodal and closed ended...
2. Secondary calcification of planktic foraminifera from the Indian sector of Southern ocean
Digital Repository Service at National Institute of Oceanography (India)
Mohan, R.; Shetye, S.; Tiwari, M.; AnilKumar, N.
This study focused on planktic foraminifera in plankton tows and surface sediments from the western Indian sector of Southern Ocean in order to evaluate the potential foraminiferal secondary calcification and/or dissolution in the sediment...
3. Aeolian sediment mass fluxes on a sandy soil in Central Patagonia
NARCIS (Netherlands)
Sterk, G.; Parigiani, J.; Cittadini, E.; Peters, P.; Scholberg, J.; Peri, P.
2012-01-01
The climate of Patagonia is semi-arid and characterised by frequent strong winds. Wind erosion is potentially a serious soil degradation process that impacts long-term sustainability of local agricultural systems, but the conditions and the rates of wind erosion in this region have not been
4. Desertification triggered by hydrological and geomorphological processes and palaeoclimatic changes in the Hunshandake Sandy Lands, Inner Mongolia, northern China
Science.gov (United States)
Yang, X.; Scuderi, L. A.; Wang, X.; Zhang, D.; Li, H.; Forman, S. L.
2015-12-01
Although Pleistocene and earlier aeolian sediments in the regions adjacent to deserts have been used as indicators of the occurrence of deserts in northern China, our multidisciplinary investigation in the Hunshandake Sandy Lands, Inner Mongolia, a typical landscape in the eastern portion of the Asian mid-latitude desert belt, demonstrates that this sandy desert is just ca. 4000 years old. Before the formation of the current sand dunes, the Hunshandake was characterized by large, deep lakes and grassland vegetation, as many sedimentary sections indicate. Optically Stimulated Luminescence (OSL) chronology shows that the three large former lakes that we investigated in detail experienced high stands from the early Holocene to ca. 5 ka. During the early and middle Holocene this desert was a temperate steppe environment, dominated by grasslands with trees near lakes and streams, as various palaeoenvironmental proxies suggest. While the Northern Hemisphere's monsoonal regions experienced catastrophic precipitation decreases at ca. 4.2 ka, many parts of the presently arid and semi-arid zone of northern China shifted from a green to a desert state. In the eastern portion of the Hunshandake, however, the desertification was directly associated with groundwater capture by the Xilamulun River, as the palaeo-drainage remains show. The process of groundwater sapping initiated a sudden and irreversible region-wide hydrologic event that lowered the groundwater table and exacerbated the desertification of the Hunshandake, further resulting in a post-humid-period mass migration of northern China's Hongshan culture, in which we think modern Chinese civilization is rooted.
5. Ocean technology
Digital Repository Service at National Institute of Oceanography (India)
Peshwe, V.B.
6. Ocean acidification
National Research Council Canada - National Science Library
Gattuso, J.P; Hansson, L
2011-01-01
The fate of much of the CO2 we produce will be to enter the ocean. In a sense, we are fortunate that ocean water is endowed with the capacity to absorb far more CO2 per litre than were it salt free...
7. Carbonate preservation during the 'mystery interval' in the northern Indian Ocean
Digital Repository Service at National Institute of Oceanography (India)
Naik, S.S.; Naidu, P.D.
maximum is a feature noted across the world oceans and considered to signify carbonate preservation, although it is missing from many sediment cores from the eastern equatorial Pacific, tropical Atlantic and subtropical Indian Ocean. The carbonate...
8. Preparation of Sandy Soil Stabilizer for Roads Based on Radiation Modified Polymer Composite
International Nuclear Information System (INIS)
Elnahas, H.H.
2016-01-01
Radiation modified polymer composite (RMPC) was studied as a means to build extremely durable sandy roads, construct trails or paths, and control dust and erosion. A dilute solution of the composite binds sandy soil fines through a coagulation bonding process. The result is a dense soil structure that has superior resistance to cracking and water penetration and can also solve erosion control problems. In erosion control applications, the diluted composite is merely sprayed onto sandy soil without compaction, effectively sealing the surface to prevent airborne dust or deterioration from erosion. The prepared composite forms an elastic, melt-able film that allows thermal compacting of the stabilized sandy soil after full dryness, for sandy road leveling, repair and restoration. The prepared composite is economical and environmentally sound when compared with traditional sandy soil stabilizing (SSS) or sealing methods.
9. Microbial sewage contamination associated with Superstorm Sandy flooding in New York City
Science.gov (United States)
O'Mullan, G.; Dueker, M.; Sahajpal, R.; Juhl, A. R.
2013-05-01
The lower Hudson River Estuary commonly experiences degraded water quality following precipitation events due to the influence of combined sewer overflows. During Superstorm Sandy, large-scale flooding occurred in many waterfront areas of New York City, including neighborhoods bordering the Gowanus Canal and Newtown Creek Superfund sites, which are known to frequently contain high levels of sewage-associated bacteria. Water, sediment, and surface swab samples were collected from Newtown Creek and Gowanus Canal flood-impacted streets and basements in the days following the storm, along with samples from the local waterways. Samples were enumerated for the sewage-indicating bacterium Enterococcus, and DNA was extracted and amplified for 16S rRNA gene sequence analysis. The waterways were found to have relatively low levels of sewage contamination in the days following the storm. In contrast, much higher levels of Enterococci were detected in basement and storm debris samples, and these bacteria were found to persist for many weeks in laboratory incubations. These data suggest that substantial sewage contamination occurred in some flood-impacted New York City neighborhoods and that the environmental persistence of flood water associated microbes requires additional study and management attention.
10. Online Media Use and Adoption by Hurricane Sandy Affected Fire and Police Departments
OpenAIRE
Chauhan, Apoorva
2014-01-01
In this thesis work, I examine the use and adoption of online communication media by 840 fire and police departments that were affected by the 2012 Hurricane Sandy. I began by exploring how and why these fire and police departments used (or did not use) online media to communicate with the public during Hurricane Sandy. Results show that fire and police departments used online media during Hurricane Sandy to give timely and relevant information to the public about things such as evacuations, ...
11. Shallow Water Habitat Mapping in Cape Cod National Seashore: A Post-Hurricane Sandy Study
Science.gov (United States)
Borrelli, M.; Smith, T.; Legare, B.; Mittermayr, A.
2017-12-01
Hurricane Sandy had a dramatic impact along coastal areas in proximity to landfall in late October 2012, and those impacts have been well-documented in terrestrial coastal settings. However, due to the lack of data on submerged marine habitats, similar subtidal impact studies have been limited. This study, one of four contemporaneous studies commissioned by the US National Park Service, developed maps of submerged shallow water marine habitats in and around Cape Cod National Seashore, Massachusetts. All four studies used similar methods of data collection, processing and analysis for the production of habitat maps. One of the motivations for the larger study conducted in the four coastal parks was to provide park managers with a baseline inventory of submerged marine habitats, against which to measure change after future storm events and other natural and anthropogenic phenomena. In this study data from a phase-measuring sidescan sonar, bottom grab samples, seismic reflection profiling, and sediment coring were all used to develop submerged marine habitat maps using the Coastal and Marine Ecological Classification Standard (CMECS). Vessel-based acoustic surveys (n = 76) were conducted in extreme shallow water across four embayments from 2014-2016. Sidescan sonar imagery covering 83.37 km2 was collected, and within that area, 49.53 km2 of co-located bathymetric data were collected with a mean depth of 4.00 m. Bottom grab samples (n = 476) to sample macroinvertebrates and sediments (along with other water column and habitat data) were collected, and these data were used along with the geophysical and coring data to develop final habitat maps using the CMECS framework.
12. Longitudinal Impact of Hurricane Sandy Exposure on Mental Health Symptoms
Directory of Open Access Journals (Sweden)
Rebecca M. Schwartz
2017-08-01
Full Text Available Hurricane Sandy hit the eastern coast of the United States in October 2012, causing billions of dollars in damage and acute physical and mental health problems. The long-term mental health consequences of the storm and their predictors have not been studied. New York City and Long Island residents completed questionnaires regarding their initial Hurricane Sandy exposure and mental health symptoms at baseline and 1 year later (N = 130). There were statistically significant decreases in anxiety scores (mean difference = −0.33, p < 0.01) and post-traumatic stress disorder (PTSD) scores (mean difference = −1.98, p = 0.001) between baseline and follow-up. Experiencing a combination of personal and property damage was positively associated with long-term PTSD symptoms (ORadj 1.2, 95% CI [1.1–1.4]) but not with anxiety or depression. Having anxiety, depression, or PTSD at baseline was a significant predictor of persistent anxiety (ORadj 2.8, 95% CI [1.1–6.8]), depression (ORadj 7.4, 95% CI [2.3–24.1]) and PTSD (ORadj 4.1, 95% CI [1.1–14.6]) at follow-up. Exposure to Hurricane Sandy has an impact on PTSD symptoms that persists over time. Given the likelihood of more frequent and intense hurricanes due to climate change, future hurricane recovery efforts must consider the long-term effects of hurricane exposure on mental health, especially on PTSD, when providing appropriate assistance and treatment.
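The adjusted odds ratios and confidence intervals quoted above come from the authors' regression models; as a purely illustrative aside, the sketch below shows how an unadjusted odds ratio and its Wald 95% CI are computed from a hypothetical 2x2 table. The counts are invented and unrelated to the study.

import math

# Hypothetical 2x2 table: exposure = combined personal and property damage,
# outcome = PTSD symptoms at follow-up. Counts are invented, not from the study.
a, b = 30, 70    # exposed: with PTSD / without PTSD
c, d = 15, 115   # unexposed: with PTSD / without PTSD

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)             # Wald standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")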
13. Sedimentological and radiochemical characteristics of marsh deposits from Assateague Island and the adjacent vicinity, Maryland and Virginia, following Hurricane Sandy
Science.gov (United States)
Smith, Christopher G.; Marot, Marci E.; Ellis, Alisha M.; Wheaton, Cathryn J.; Bernier, Julie C.; Adams, C. Scott
2015-09-15
The effect of tropical and extratropical cyclones on coastal wetlands and marshes is highly variable and depends on a number of climatic, geologic, and physical variables. The impacts of storms can be either positive or negative with respect to the wetland and marsh ecosystems. Small to moderate amounts of inorganic sediment added to the marsh surface during storms or other events help to abate pressure from sea-level rise. However, if the volume of sediment is large and the resulting deposits are thick, the organic substrate may compact causing submergence and a loss in elevation. Similarly, thick deposits of coarse inorganic sediment may also alter the hydrology of the site and impede vegetative processes. Alternative impacts associated with storms include shoreline erosion at the marsh edge as well as potential emergence. Evaluating the outcome of these various responses and potential long-term implications is possible from a systematic assessment of both historical and recent event deposits. A study was conducted by the U.S. Geological Survey to assess the sedimentological and radiochemical characteristics of marsh deposits from Assateague Island and areas around Chincoteague Bay, Maryland and Virginia, following Hurricane Sandy in 2012. The objectives of this study were to (1) characterize the surficial sediment of the relict to recent washover fans and back-barrier marshes in the study area, and (2) characterize the sediment of six marsh cores from the back-barrier marshes and a single marsh island core near the mainland. These geologic data will be integrated with other remote sensing data collected along Assateague Island in Maryland and Virginia and assimilated into an assessment of coastal wetland response to storms.
14. The effects of Hurricane Sandy on trauma center admissions.
Science.gov (United States)
Curran, T; Bogdanovski, D A; Hicks, A S; Bilaniuk, J W; Adams, J M; Siegel, B K; DiFazio, L T; Durling-Grover, R; Nemeth, Z H
2018-02-01
Hurricane Sandy was a particularly unusual storm with regard to both size and location of landfall. The storm landed in New Jersey, which is unusual for a tropical storm of such scale, and created hazardous conditions which caused injury to residents during the storm and in the months following. This study aims to describe differences in trauma center admissions and patterns of injury during this time period when compared to a period with no such storm. Data were collected for this study from patients who were admitted to the trauma center at Morristown Medical Center during Hurricane Sandy or the ensuing cleanup efforts (patients admitted between 29 October 2012 and 27 December 2012) as well as a control group consisting of all patients admitted to the trauma center between 29 October 2013 and 27 December 2013. Patient information was collected to compare the admissions of the trauma center during the period of the storm and cleanup to the control period. A total of 419 cases were identified in the storm and cleanup period. 427 were identified for the control. Striking injuries were more common in the storm and cleanup group by 266.7% (p = 0.0107); cuts were more common by 650.8% (p = 0.0044). Medical records indicate that many of these injuries were caused by Hurricane Sandy. Self-inflicted injuries were more common by 301.3% (p = 0.0294). There were no significant differences in the total number of patients, mortality, or injury severity score between the two cohorts. The data we have collected show that the conditions caused by Hurricane Sandy and the following cleanup had a significant effect on injury patterns, with more patients having been injured by being struck by falling or thrown objects, cut while using tools, or causing self-inflicted injuries. These changes, particularly during the cleanup period, are indicative of environmental changes following the storm which increase these risks of injury.
15. Dynamic compaction with high energy of sandy hydraulic fills
Directory of Open Access Journals (Sweden)
Khelalfa Houssam
2017-09-01
Full Text Available A case study of the adoption of the high-energy dynamic compaction technique in a sandy hydraulic fill is presented, assessing the feasibility of this technique for ensuring the stability of the caisson workshop and minimizing the risk of liquefaction during manufacture. The article establishes a diagnostic of the dynamic compaction trial based on the results of SPT tests and quality control, together with the details of the compaction work and the properties of the fill materials. A theory of soil response to a high-energy impact during dynamic compaction is proposed.
16. Hospital emergency preparedness and response during Superstorm Sandy.
Science.gov (United States)
2015-01-01
This article presents the findings of a report by the HHS Office of Inspector General (OIG) on the performance of 172 Medicare-certified hospitals in the New York Metropolitan Area before, during, and after Sandy. It makes recommendations on how to close gaps that were found in emergency planning and execution for a disaster of this magnitude. To download the complete 40-page report and a Podcast based on it, go to http://oig.hhs.gov/oei/reports/oei-06-13-00260.asp.
17. Remediation of Diesel Fuel Contaminated Sandy Soil using Ultrasonic Waves
Directory of Open Access Journals (Sweden)
Wulandari P.S.
2010-01-01
Full Text Available Ultrasonic cleaning has been used in industry for some time, but its application to contaminated soil has only recently received considerable attention; it is a very new technique, especially in Indonesia. An ultrasonic cleaner works mostly by the energy released from the collapse of millions of microscopic cavitation bubbles near the dirty surface. This paper investigates the use of ultrasonic waves to enhance remediation of diesel fuel contaminated sandy soil, considering the ultrasonic power, soil particle size, soil density, water flow rate, and duration of ultrasonic wave application.
18. Ocean uptake of carbon dioxide
International Nuclear Information System (INIS)
Peng, Tsung-Hung; Takahashi, Taro
1993-01-01
Factors controlling the capacity of the ocean for taking up anthropogenic CO2 include carbon chemistry, distribution of alkalinity, pCO2 and total concentration of dissolved CO2, the sea-air pCO2 difference, the gas exchange rate across the sea-air interface, the biological carbon pump, ocean water circulation and mixing, and dissolution of carbonate in deep sea sediments. A general review of these processes is given, and models of the ocean-atmosphere system based on our understanding of these regulating processes are used to estimate the magnitude of CO2 uptake by the ocean. We conclude that the ocean can absorb up to 35% of the fossil fuel emission. Direct measurements show that 55% of CO2 from fossil fuel burning remains in the atmosphere. The remaining 10% is not accounted for by atmospheric increases and ocean uptake. In addition, it is estimated that an amount equivalent to 30% of recent annual fossil fuel emissions is released into the atmosphere as a result of deforestation and farming. To balance the global carbon budget, a sizable carbon sink besides the ocean is needed. Storage of carbon in the terrestrial biosphere as a result of CO2 fertilization is a potential candidate for such missing carbon sinks
19. Recent coastal evolution in a carbonate sandy environments and relation to beach ridge formation: the case of Anegada, British Virgin Islands
Science.gov (United States)
Cescon, Anna Lisa; Cooper, J. Andrew G.; Jackson, Derek W. T.
2014-05-01
In a changing climate, coastal areas will be affected by more frequent extreme events. Understanding the relationship between extreme events and coastal geomorphic response is critical to future adaptation plans. Beach ridge landforms are commonly identified as hurricane deposits along tropical coasts in Australia and in the Caribbean Sea. However, their formative processes in such environments are still not well understood. In particular, the role of different extreme wave events (storm waves, tsunami waves and extreme swell) in generating beach ridges is critical to their use as palaeotempestology archives. Anegada Island is a carbonate platform situated in the British Virgin Islands between the Atlantic Ocean and the Caribbean Sea. Pleistocene in age, Anegada is surrounded by the Horseshoe fringing coral reef. Two Holocene sandy beach ridge plains are present on the western part of the island. The north beach ridge plain is Atlantic facing and has at least 30 ridges; the south beach ridge plain is Caribbean Sea facing and contains 10 ridges. Historical aerial photos enabled the shoreline evolution from 1953 to 2012 to be studied. Three different coastal domains are associated with the beach ridge plains: strong east-west longshore transport affects the north coastline, the south-west coastline from West End to Pomato Point represents an export corridor for these sediments, and finally, along the southern coastline from Pomato Point to Settling Point, the area presents a depositional zone with little to no change in the last 70 years. The link between the extreme wave events that have affected Anegada Island in the last 70 years and beach ridge creation is discussed. Hurricane Donna crossed over Anegada Island in 1960: its geomorphological signature is tracked in the shoreline change analysis and its implication in beach ridge formation is discussed. Anegada Island has also been impacted by tsunami waves (Atwater et al., 2012) and a comparative discussion of the
20. Ocean energy
International Nuclear Information System (INIS)
2006-01-01
This annual evaluation is a synthesis of works published in 2006. Comparisons are presented between wind power performance and the objectives of the European Commission White Paper and the Biomass Action Plan. The sector covers the energy exploitation of all energy flows specifically supplied by the seas and oceans. At present, most efforts in both research and development and in experimental implementation are concentrated on tidal currents and wave power. 90% of today's worldwide ocean energy production is represented by a single site: the Rance Tidal Power Plant. Ocean energies must face up to two challenges: progress has to be made in finalizing and perfecting technologies, and costs must be brought under control. (A.L.B.)
1. Predicting the Storm Surge Threat of Hurricane Sandy with the National Weather Service SLOSH Model
Directory of Open Access Journals (Sweden)
Cristina Forbes
2014-05-01
Full Text Available Numerical simulations of the storm tide that flooded the US Atlantic coastline during Hurricane Sandy (2012) are carried out using the National Weather Service (NWS) Sea Lakes and Overland Surges from Hurricanes (SLOSH) storm surge prediction model to quantify its ability to replicate the height, timing, evolution and extent of the water that was driven ashore by this large, destructive storm. Recent upgrades to the numerical model, including the incorporation of astronomical tides, are described, and simulations with and without these upgrades are contrasted to assess their contributions to the increase in forecast accuracy. It is shown, through comprehensive verifications of SLOSH simulation results against peak water surface elevations measured at National Oceanic and Atmospheric Administration (NOAA) tide gauge stations, by storm surge sensors deployed, and by hundreds of high water marks collected by the U.S. Geological Survey (USGS), that the SLOSH-simulated water levels at 71% (89%) of the data measurement locations have less than 20% (30%) relative error. The RMS error between observed and modeled peak water levels is 0.47 m. In addition, the model's extreme computational efficiency enables it to run large, automated ensembles of predictions in real-time to account for the high variability that can occur in tropical cyclone forecasts, thus furnishing a range of values for the predicted storm surge and inundation threat.
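The verification statistics reported here, the share of measurement sites within a relative-error threshold and the RMS error of peak water levels, follow directly from paired observed and modeled peaks. A minimal Python sketch with invented station values is shown below; it reproduces the form of the metrics, not the study's numbers.

import numpy as np

# Invented peak water levels (m) at a handful of stations: observed vs. modeled.
obs = np.array([2.1, 3.4, 1.8, 2.9, 4.0, 2.5])
mod = np.array([2.3, 3.1, 1.7, 3.3, 3.6, 2.6])

rel_err = np.abs(mod - obs) / obs                 # relative error at each station
rmse = np.sqrt(np.mean((mod - obs) ** 2))         # RMS error over all stations
print(f"{np.mean(rel_err < 0.20):.0%} of stations within 20% relative error")
print(f"{np.mean(rel_err < 0.30):.0%} within 30%; RMSE = {rmse:.2f} m")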
2. Modelling wind forced bedforms on a sandy beach
NARCIS (Netherlands)
de Vries, S.; Van Thiel de Vries, J.; Ruessink, B.G.
2013-01-01
This paper aims to conceptually simulate observed spatial and temporal variability in aeolian sediment transport rates, erosion and deposition on a beach. Traditional strategies of modeling aeolian sediment transport rates do not account for supply limitations that are common on natural beaches. A
3. New York Bight sub-estuaries Study following Hurricane Sandy
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Polychlorinated biphenyls (PCBs), organochlorine pesticides, polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs) and alkylated PAHs,...
4. Impact of Offshore Wind Energy Plants on the Soil Mechanical Behaviour of Sandy Seafloors
Science.gov (United States)
Stark, Nina; Lambers-Huesmann, Maria; Zeiler, Manfred; Zoellner, Christian; Kopf, Achim
2010-05-01
Over the last decade, wind energy has become an important renewable energy source. Especially, the installation of offshore windfarms offers additional space and higher average wind speeds than the well-established windfarms onshore. Certainly, the construction of offshore wind turbines has an impact on the environment. In the framework of the Research at Alpha VEntus (RAVE) project in the German offshore wind energy farm Alpha Ventus (north of the island Borkum in water depths of about 30 m) a research plan to investigate the environmental impact had been put into place. An ongoing study focuses on the changes in soil mechanics of the seafloor close to the foundations and the development of scour. Here, we present results of the first geotechnical investigations after construction of the plants (ca. 1 - 6 months) compared to geotechnical measurements prior to construction. To study the soil mechanical behaviour of the sand, sediment samples from about thirty different positions were measured in the laboratory to deliver, e.g., grain size (0.063 - 0.3 mm), friction angles (~ 32°), unit weight (~ 19.9 kN/m³) and void ratios (~ 0.81). For acoustic visualisation, side-scan-sonar (towed and stationary) and multibeam-echosounders (hull mounted) were used. Data show a flat, homogenous seafloor prior to windmill erection, and scouring effects at and in the vicinity of the foundations afterwards. Geotechnical in-situ measurements were carried out using a standard dynamic Cone Penetration Testing lance covering the whole windfarm area excluding areas in a radius free-fall penetrometer Nimrod was deployed at the same spots, and furthermore, in the areas close to the tripod foundations (down to a distance of ~ 5 m from the central pile). Before construction, CPT as well as Nimrod deployments confirm a flat, homogenous sandy area with tip resistance values ranging from 1200 - 1600 kPa (CPT with a mass of ~ 100 kg and an impact velocity of ~ 1 m/s) and quasi-static bearing
5. Anthropopression markers in lake bottom sediments
Science.gov (United States)
2014-05-01
top layer of sediments consists of organic sediment ("sapropel" type). The littoral zone is dominated by sandy material from shore denudation. In river mouths, sandy deltas are formed. The most contaminated sediments are deposited in the central pool, which is a natural trap for the substances carried by the river that drains wastewater from urban areas. At its mouth, the sediment samples were significantly contaminated with chromium, zinc, cadmium, copper, nickel, lead and mercury. A high content of total phosphorus was also detected. A different role is played by a large river flowing through the lake: by flushing the sediments, it reduces their pollution. The lowest content of markers was detected in headwater areas and in littoral zones exposed to wave action.
6. Coastline evolution of Portuguese low-lying sandy coast in the last 50 years: an integrated approach
Science.gov (United States)
Ponte Lira, Cristina; Nobre Silva, Ana; Taborda, Rui; Freire de Andrade, Cesar
2016-06-01
Regional/national-scale information on coastline rates of change and trends is extremely valuable, but such studies are scarce. A widely accepted standardized methodology for analysing long-term coastline change has been difficult to achieve, but it is essential for an integrated and holistic approach to coastline evolution and hence for supporting coastal management actions. Additionally, databases providing knowledge on coastline evolution are of key importance to support both coastal management experts and users. The main objective of this work is to present the first systematic, national-scale and consistent long-term coastline evolution data for Portuguese mainland low-lying sandy coasts. The methodology quantifies coastline evolution using a unique and robust coastline indicator (the foredune toe), which is independent of short-term changes. The dataset presented comprises (1) two polyline sets, mapping the 1958 and 2010 sandy beach-dune system coastline, both optimized for working at 1 : 50 000 scale or smaller; (2) one polyline set representing long-term change rates between 1958 and 2010, estimated every 250 m; and (3) a table with the minimum, maximum and mean evolution rates for the sandy beach-dune system coastline. All science data produced here are openly accessible at https://doi.pangaea.de/10.1594/PANGAEA.859136 and can be used in other studies. Results show beach erosion as the dominant trend, with a mean change rate of -0.24 ± 0.01 m year-1 for all mainland Portuguese beach-dune systems. Although erosion is dominant, this evolution is variable in signal and magnitude in different coastal sediment cells and also within each cell. The most relevant beach erosion issues were found in the coastal stretches of Espinho-Torreira and Costa Nova-Praia de Mira, Cova da Gala-Leirosa, and Cova do Vapor-Costa da Caparica. The coastal segments Minho River-Nazaré and Costa da Caparica adjacent to the coast exhibit a history of major human interventions
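Long-term change rates of this kind are commonly computed as end-point rates: the cross-shore displacement of the coastline indicator between the two survey dates divided by the elapsed time, evaluated transect by transect. Whether or not that is exactly the method used for this dataset, the Python sketch below illustrates the arithmetic with invented transect displacements; the 250 m spacing and the sign convention (negative = erosion) follow the description above, everything else is illustrative.

# End-point rate of coastline change per transect: displacement of the foredune toe
# between the two mapped coastlines divided by the elapsed time.
# Negative = landward movement (erosion). Displacements are invented, one per 250 m transect.
displacements_m = [-15.2, -9.8, 3.1, -22.4, -12.7]   # 1958 -> 2010
years = 2010 - 1958

rates = [d / years for d in displacements_m]          # m per year, per transect
mean_rate = sum(rates) / len(rates)
print([round(r, 2) for r in rates], f"mean = {mean_rate:.2f} m/yr")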
7. Ocean Acidification
Science.gov (United States)
Ocean and coastal acidification is an emerging issue caused by increasing amounts of carbon dioxide being absorbed by seawater. Changing seawater chemistry impacts marine life, ecosystem services, and humans. Learn what EPA is doing and what you can do.
8. Ocean transportation
National Research Council Canada - National Science Library
Frankel, Ernst G; Marcus, Henry S
1973-01-01
.... The discussion of technology considers the ocean transportation system as a whole, and the composite subsystems such as hull, outfit, propulsion, cargo handling, automation, and control and interface technology...
9. Ocean transportation
National Research Council Canada - National Science Library
Frankel, Ernst G; Marcus, Henry S
1973-01-01
.... In ocean transportation economics we present investment and operating costs as well as the results of a study of financing of shipping. Similarly, a discussion of government aid to shipping is presented.
10. Ocean Color
Data.gov (United States)
National Aeronautics and Space Administration — Satellite-derived Ocean Color Data sets from historical and currently operational NASA and International Satellite missions including the NASA Coastal Zone Color...
11. The role of the sediment barrier
International Nuclear Information System (INIS)
Freeman, T.J.; Schultheiss, P.J.; Searle, R.C.; Sills, G.C.; Toolan, F.E.
1989-01-01
The conference 'Disposal of radioactive wastes in seabed sediments' was organized by the Society for Underwater Technology to review the potential of certain seabed sediments to provide a long-term containment for radioactive wastes. Its objectives were to assess: (1) what has been learned about the properties and nature of the sediments of the deep ocean; (2) the merits and demerits of the conceptual techniques that have been developed to dispose of waste; and (3) whether what has been learned about deep ocean disposal has any relevance to other areas of marine science. This chapter introduces the subject matter of the conference in the framework of the international research programme and discusses what has been learned about the role of the sediment barrier. (author)
12. Ocean Quality
OpenAIRE
Brevik, Roy Schjølberg; Jordheim, Nikolai; Martinsen, John Christian; Labori, Aleksander; Torjul, Aleksander Lelis
2017-01-01
Bacheloroppgave i Internasjonal Markedsføring fra ESADE i Spania, 2017 In this thesis we were going to answer the problem definition “which segments in the Spanish market should Ocean Quality target”. By doing so we started to collect data from secondary sources in order to find information about the industry Ocean Quality are operating in. After conducting the secondary research, we still lacked essential information about the existing competition in the aquaculture industry o...
13. Quality Assurance After a Natural Disaster: Lessons from Hurricane Sandy.
Science.gov (United States)
Dickerson, Collin; Hsu, Yanshen; Mendoza, Sandra; Osman, Iman; Ogilvie, Jennifer; Patel, Kepal; Moreira, Andre L
2018-04-01
Biospecimen quality can vary depending on many pre- and post-collection variables. In this study, we consider a natural disaster as a post-collection variable that may have compromised the quality of frozen tissue specimens. To investigate this possible link, we compared the quality of nucleic acids, the level of antigenicity, and the preservation of histology from frozen specimens collected before and after the power outage caused by Hurricane Sandy. To analyze nucleic acid quality, we extracted both DNA and RNA and performed capillary electrophoresis to compare the quality and concentrations of the nucleic acids. To compare antigenicity, frozen sections were cut and immunostained for thyroid transcription factor 1 (TTF-1), a nuclear transcription protein commonly used as a diagnostic biomarker for multiple cancer types, including thyroid and lung cancers. Positive expression of TTF-1, as noted by homogenous nuclear staining, would demonstrate that the TTF-1 proteins could still bind antibodies and, therefore, that these proteins were not significantly degraded. Furthermore, representative frozen sections stained with hematoxylin and eosin were also assessed qualitatively by a trained pathologist to examine any possible histologic aberrations. Due to the similar quality of the tissue samples collected before and after the storm, Hurricane Sandy had no discernable effect on the quality of frozen specimens, and these specimens exposed to the natural disaster are still valuable research tools.
14. Measurement of earthquake-induced shear strain in sandy gravel
International Nuclear Information System (INIS)
Ohkawa, I.; Futaki, M.; Yamanouchi, H.
1989-01-01
The nuclear power reactor buildings in Japan have been constructed on hard rock ground formed in or before the Tertiary. This is mainly because a nuclear reactor building is much heavier than common buildings and requires a large bearing capacity of the underlying soil deposit, and additionally because excessive deformation in the soil deposit might damage the reactor building and subsequently cause malfunction of the important internal facilities. Another reason is that the Quaternary soil deposits are not fully characterized with respect to their dynamic properties. Gravel and sandy gravel, the representative soils of the Quaternary, have been believed to be suitable deposits to support the foundation of a common building, although their physical properties have rarely been investigated quantitatively in detail. In this paper, the dynamic deformability, i.e., the shear stress-strain relationship, of a Quaternary diluvial soil deposit is examined through earthquake ground motion measurement using accelerometers and pore-pressure meters, specific devices developed in this research work. The soil deposit studied in this research is the sandy gravel of the diluvial and the alluvial
15. Superstorm Sandy and the academic achievement of university students.
Science.gov (United States)
Doyle, Matthew D; Lockwood, Brian; Comiskey, John G
2017-10-01
16. Transport processes near coastal ocean outfalls
Science.gov (United States)
Noble, M.A.; Sherwood, C.R.; Lee, Hooi-Ling; Xu, Jie; Dartnell, P.; Robertson, G.; Martini, M.
2001-01-01
The central Southern California Bight is an urbanized coastal ocean where complex topography and large-scale atmospheric and oceanographic forcing have led to numerous sediment-distribution patterns. Two large embayments, Santa Monica and San Pedro Bays, are connected by the short, very narrow shelf off the Palos Verdes peninsula. Ocean-sewage outfalls are located in the middle of Santa Monica Bay, on the Palos Verdes shelf and at the southeastern edge of San Pedro Bay. In 1992, the US Geological Survey, together with allied agencies, began a series of programs to determine the dominant processes that transport sediment and associated pollutants near the three ocean outfalls. As part of these programs, arrays of instrumented moorings that monitor currents, waves, water clarity and water density and collect resuspended materials were deployed on the continental shelf and slope; information was also collected on the sediment and contaminant distributions in the region. The data and models developed for the Palos Verdes shelf suggest that the large reservoir of DDT/DDE in the coastal ocean sediments will continue to be exhumed and transported along the shelf for a long time. On the Santa Monica shelf, very large internal waves, or bores, are generated at the shelf break. The near-bottom currents associated with these waves sweep sediments and the associated contaminants from the shelf onto the continental slope. A new program underway on the San Pedro shelf will determine whether water and contaminants from a nearby ocean outfall are transported to the local beaches by coastal ocean processes. The large variety of processes found to transport sediments and contaminants in this small region of the continental margin suggests that in regions with complex topography, local processes change markedly over small spatial scales. One cannot necessarily infer that the dominant transport processes will be similar even in adjacent regions.
17. A Coordinated USGS Science Response to Hurricane Sandy
Science.gov (United States)
Jones, S.; Buxton, H. T.; Andersen, M.; Dean, T.; Focazio, M. J.; Haines, J.; Hainly, R. A.
2013-12-01
In late October 2012, Hurricane Sandy came ashore during a spring high tide on the New Jersey coastline, delivering hurricane-force winds, storm tides exceeding 19 feet, driving rain, and plummeting temperatures. Hurricane Sandy resulted in 72 direct fatalities in the mid-Atlantic and northeastern United States, and widespread and substantial physical, environmental, ecological, social, and economic impacts estimated at near $50 billion. Before the landfall of Hurricane Sandy, the USGS provided forecasts of potential coastal change; collected oblique aerial photography of pre-storm coastal morphology; deployed storm-surge sensors, rapid-deployment streamgages, wave sensors, and barometric pressure sensors; conducted Light Detection and Ranging (lidar) aerial topographic surveys of coastal areas; and issued a landslide alert for landslide prone areas. During the storm, Tidal Telemetry Networks provided real-time water-level information along the coast. Long-term networks and rapid-deployment real-time streamgages and water-quality monitors tracked river levels and changes in water quality. Immediately after the storm, the USGS serviced real-time instrumentation, retrieved data from over 140 storm-surge sensors, and collected other essential environmental data, including more than 830 high-water marks mapping the extent and elevation of the storm surge. Post-storm lidar surveys documented storm impacts to coastal barriers informing response and recovery and providing a new baseline to assess vulnerability of the reconfigured coast. The USGS Hazard Data Distribution System served storm-related information from many agencies on the Internet on a daily basis. Immediately following Hurricane Sandy the USGS developed a science plan, 'Meeting the Science Needs of the Nation in the Wake of Hurricane Sandy-A U.S. Geological Survey Science Plan for Support of Restoration and Recovery'. The plan will ensure continuing coordination of internal USGS activities as well as
18. ACHIEVEMENTS AND PERSPECTIVES ON STONE FRUIT GROWING ON SANDY SOILS
Directory of Open Access Journals (Sweden)
Anica Durău
2012-01-01
Full Text Available Climatic conditions in the sandy soils of southern Oltenia encourage cultivation of tree species provided that specific technologies are applied. The possibility of putting poorly fertile sandy soils to use, fruit ripening 7-10 days earlier, and high yields and quality are the main factors supporting the development of fruit growing on the sandy soils of southern Oltenia. The main objectives of the research carried out at CCDCPN Dăbuleni were to establish and improve the assortment of stone fruit species adapted to the stress of the sandy soils; to evaluate the influence of stress on the trees and on the size and quality of production; to develop technological links (planting distances, training forms, fertilization) for obtaining high and consistent annual production of high quality with low pesticide residues; and to establish an integrated health control program for the trees with emphasis on biotechnical methods. Research has shown good behaviour of the stone fruit species, and their recommended proportion is 75% of all fruit trees (peach 36%, apricot 14%, plum 15%, and sweet and sour cherry 10% of the total area). Results on peach varieties revealed ’Redhaven’, ’Suncrest’ and ’Loring’ with yields ranging from 24.8 t/ha to 29.0 t/ha and a maturation period from July to August, and the varieties ’NJ 244’, ’Fayette’ and ’Flacara’ with yields ranging from 19.7 t/ha to 23.0 t/ha and a maturation period from August to September. The sweet cherry varieties ’Van’, ’Rainier’ and ’Stella’ gave yields ranging from 17.2 to 24.4 t/ha. Among the sour cherry varieties studied, ’Oblacinska’ yielded 11.0 t/ha, ’Cernokaia’ 10.5 t/ha and ’Schatten Morelle’ 9.1 t/ha. Regarding optimum planting density and peach crown shape, the highest fruit yields were obtained with the vertical cordon crown form, with values ranging from 15.9 t/ha at a distance of 2 m to 10.3 t/ha at a distance
19. The impact of Hurricane Sandy on the shoreface and inner shelf of Fire Island, New York: large bedform migration but limited erosion
Science.gov (United States)
Goff, John A.; Flood, Roger D.; Austin, James A.; Schwab, William C.; Christensen, Beth A.; Browne, Cassandra M.; Denny, Jane F.; Baldwin, Wayne E.
2015-01-01
We investigate the impact of superstorm Sandy on the lower shoreface and inner shelf offshore the barrier island system of Fire Island, NY using before-and-after surveys involving swath bathymetry, backscatter and CHIRP acoustic reflection data. As sea level rises over the long term, the shoreface and inner shelf are eroded as barrier islands migrate landward; large storms like Sandy are thought to be a primary driver of this largely evolutionary process. The “before” data were collected in 2011 by the U.S. Geological Survey as part of a long-term investigation of the Fire Island barrier system. The “after” data were collected in January, 2013, ~two months after the storm. Surprisingly, no widespread erosional event was observed. Rather, the primary impact of Sandy on the shoreface and inner shelf was to force migration of major bedforms (sand ridges and sorted bedforms) 10’s of meters WSW alongshore, decreasing in migration distance with increasing water depth. Although greater in rate, this migratory behavior is no different than observations made over the 15-year span prior to the 2011 survey. Stratigraphic observations of buried, offshore-thinning fluvial channels indicate that long-term erosion of older sediments is focused in water depths ranging from the base of the shoreface (~13–16 m) to ~21 m on the inner shelf, which is coincident with the range of depth over which sand ridges and sorted bedforms migrated in response to Sandy. We hypothesize that bedform migration regulates erosion over these water depths and controls the formation of a widely observed transgressive ravinement; focusing erosion of older material occurs at the base of the stoss (upcurrent) flank of the bedforms. Secondary storm impacts include the formation of ephemeral hummocky bedforms and the deposition of a mud event layer.
20. The impact of Hurricane Sandy on the shoreface and inner shelf of Fire Island, New York: Large bedform migration but limited erosion
Science.gov (United States)
Goff, John A.; Flood, Roger D.; Austin, James A., Jr.; Schwab, William C.; Christensen, Beth; Browne, Cassandra M.; Denny, Jane F.; Baldwin, Wayne E.
2015-04-01
We investigate the impact of superstorm Sandy on the lower shoreface and inner shelf offshore the barrier island system of Fire Island, NY using before-and-after surveys involving swath bathymetry, backscatter and CHIRP acoustic reflection data. As sea level rises over the long term, the shoreface and inner shelf are eroded as barrier islands migrate landward; large storms like Sandy are thought to be a primary driver of this largely evolutionary process. The "before" data were collected in 2011 by the U.S. Geological Survey as part of a long-term investigation of the Fire Island barrier system. The "after" data were collected in January, 2013, ~two months after the storm. Surprisingly, no widespread erosional event was observed. Rather, the primary impact of Sandy on the shoreface and inner shelf was to force migration of major bedforms (sand ridges and sorted bedforms) 10's of meters WSW alongshore, decreasing in migration distance with increasing water depth. Although greater in rate, this migratory behavior is no different than observations made over the 15-year span prior to the 2011 survey. Stratigraphic observations of buried, offshore-thinning fluvial channels indicate that long-term erosion of older sediments is focused in water depths ranging from the base of the shoreface (~13-16 m) to ~21 m on the inner shelf, which is coincident with the range of depth over which sand ridges and sorted bedforms migrated in response to Sandy. We hypothesize that bedform migration regulates erosion over these water depths and controls the formation of a widely observed transgressive ravinement; focusing erosion of older material occurs at the base of the stoss (upcurrent) flank of the bedforms. Secondary storm impacts include the formation of ephemeral hummocky bedforms and the deposition of a mud event layer.
1. Morphodynamic Impacts of Hurricane Sandy on the Inner-shelf (Invited)
Science.gov (United States)
Trembanis, A. C.; Beaudoin, J. D.; DuVal, C.; Schmidt, V. E.; Mayer, L. A.
2013-12-01
Through the careful execution of precision high-resolution acoustic sonar surveys over the period of October 2012 through July 2013, we have obtained a unique set of high-resolution before and after storm measurements of seabed morphology and in situ hydrodynamic conditions (waves and currents) capturing the impact of the storm at an inner continental shelf field site known as the 'Redbird reef' (Raineault et al., 2013). Understanding the signature of this storm event is important for identifying the impacts of such events and for understanding the role that such events have in the transport of sediment and marine debris on the inner continental shelf. In order to understand and characterize the ripple dynamics and scour processes in an energetic, heterogeneous inner-shelf setting, a series of high-resolution geoacoustic surveys were conducted before and after Hurricane Sandy. Our overall goal is to improve our understanding of bedform dynamics and spatio-temporal length scales and defect densities through the application of a recently developed fingerprint algorithm technique (Skarke and Trembanis, 2011). Utilizing high-resolution swath sonar collected by an AUV and from surface vessel multibeam sonar, our study focuses both on bedforms in the vicinity of manmade seabed objects (e.g. shipwrecks and subway cars) and dynamic natural ripples on the inner-shelf in energetic coastal settings with application to critical military operations such as mine countermeasures. Seafloor mapping surveys were conducted both with a ship-mounted multibeam echosounder (200 kHz and 400 kHz) and an Autonomous Underwater Vehicle (AUV) configured with high-resolution side-scan sonar (900 and 1800 kHz) and a phase measuring bathymetric sonar (500 kHz). These geoacoustic surveys were further augmented with data collected by in situ instruments placed on the seabed that recorded measurements of waves and currents at the site before, during, and after the storm. Multibeam echosounder map of
2. Delaware Bay, Delaware Sediment Distribution 2003 to 2004
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The area of coverage consists of 38 square miles of benthic habitat mapped from 2003 to 2004 along the middle to lower Delaware Bay Coast. The bottom sediment map...
3. The fate of fresh and stored 15N-labelled sheep urine and urea applied to a sandy and a sandy loam soil using different application strategies
DEFF Research Database (Denmark)
Sørensen, P.; Jensen, E.S.
1996-01-01
The fate of nitrogen from N-15-labelled sheep urine and urea applied to two soils was studied under field conditions. Labelled and stored urine equivalent to 204 kg N ha(-1) was either incorporated in soil or applied to the soil surface prior to sowing of Italian ryegrass (Lolium multiflorum L...... and soil was not significantly different for incorporated urine and urea. Almost all the supplied labelled N was accounted for in soil and herbage in the sandy loam soil, whereas 33-34% of the labelled N was unaccounted for in the sandy soil. When the stored urine was applied to the soil surface, 20...... was applied to growing ryegrass at the sandy loam soil, the immobilization of urine-derived N was significantly reduced compared to application prior to sowing. The results indicated that the net mineralization of urine N was similar to that of urea in the sandy soil, but only about 75% of the urine N was net...
4. The Indian Ocean nodule field: Geology and resource potential
Digital Repository Service at National Institute of Oceanography (India)
Mukhopadhyay, R.; Ghosh, A; Iyer, S.D.
This book briefly accounts for the physiography, geology, biology, physics and chemistry of the nodule field, and discusses in detail the aspects of structure, tectonic and volcanism in the field. The role of the ocean floor sediment that hosts...
5. Drift pumice in the central Indian Ocean Basin: Geochemical evidence
Digital Repository Service at National Institute of Oceanography (India)
Pattan, J.N.; Mudholkar, A.V.; JaiSankar, S.; Ilangovan, D.
Abundant white to light grey-coloured pumice without ferromanganese oxide coating occurs within the Quaternary sediments of the Central Indian Ocean Basin (CIOB). Two distinct groups of pumice are identified from their geochemical composition, which...
6. Interactive effects of vegetation and sediment properties on erosion of salt marshes in the Northern Adriatic Sea.
Science.gov (United States)
Lo, V B; Bouma, T J; van Belzen, J; Van Colen, C; Airoldi, L
2017-10-01
We investigated how lateral erosion control, measured by novel photogrammetry techniques, is modified by the presence of Spartina spp. vegetation, sediment grain size, and the nutrient status of salt marshes across 230 km of the Italian Northern Adriatic coastline. Spartina spp. vegetation reduced erosion across our study sites. The effect was more pronounced in sandy soils, where erosion was reduced by 80% compared to 17% in silty soils. Erosion resistance was also enhanced by Spartina spp. root biomass. In the absence of vegetation, erosion resistance was enhanced by silt content, with mean erosion 72% lower in silty vs. sandy soils. We found no relevant relationships with nutrient status, likely due to overall high nutrient concentrations and low C:N ratios across all sites. Our results contribute to quantifying coastal protection ecosystem services provided by salt marshes in both sandy and silty sediments. Copyright © 2017 Elsevier Ltd. All rights reserved.
7. Deep ocean model penetrator experiments
International Nuclear Information System (INIS)
Freeman, T.J.; Burdett, J.R.F.
1986-01-01
Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site
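The "elementary models" mentioned above are not spelled out in the abstract; purely as an illustration of the kind of energy balance such models can rest on (my own toy model, assuming a constant average resisting force), the final embedment depth $z$ of a penetrator of mass $m$ striking the seabed at velocity $v_i$ can be estimated from

$$R\,z \;=\; \tfrac{1}{2}\,m\,v_i^{2} + (W - B)\,z \quad\Longrightarrow\quad z \;=\; \frac{\tfrac{1}{2}\,m\,v_i^{2}}{R - (W - B)},$$

where $R$ is the average resisting force offered by the sediment, $W$ the penetrator weight and $B$ its buoyancy; the estimate is only meaningful when $R > W - B$.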
8. Coastal sediment dynamics in Spitsbergen
Science.gov (United States)
Deloffre, J.; Lafite, R.; Baltzer, A.; Marlin, C.; Delangle, E.; Dethleff, D.; Petit, F.
2010-12-01
In the Arctic, knowledge of coastal sediment dynamics and sedimentary processes is limited. The study area is located in the microtidal Kongsfjorden glacial fjord on the north-western coast of Spitsbergen in the Arctic Ocean (79°N). In this area sediment is supplied to the coastal zone by small temporary rivers that flow into the fjord. The objectives of this study are to (i) assess the origin and fate of fine-grained particles and (ii) assess the impact of sea ice cover on sediment dynamics. The sampling strategy is based on characterization of sediment and SPM (grain size, X-ray diffraction, SEM images, carbonate and organic matter contents) from the glacier to the coastal zone, completed by a bottom-sediment map of the nearshore using side-scan sonar validated with Ekman grab sampling. River inputs (i.e. river plumes) to the coastal zone were followed punctually using CTD (conductivity, temperature, depth and turbidity) profiles. An OBS (water level, temperature and turbidity) operating at high frequency for at least one year (including under sea ice cover) was installed at the river mouths at 10 m depth. In the coastal zone the fine-grained sediment deposits are limited to mud patches located at the mouths of rivers originating from the piedmont glacier. However, a significant amount of sediment originates from the coastal glacier located in the eastern part of the fjord via two processes: direct transfer and ice-drop. Results from the turbidity measurements show that sediment dynamics are controlled by river inputs, in particular during the melting period. During winter, sediment resuspension can occur, directly linked to significant wind events. When the sea ice cover is present (January to April) no sediment dynamics are observed. Sediment processes in the coastal zone of Arctic fjords are thus significant; however, only a small amount of the SPM originating from the river plume settles in the coastal zone; only the coarser material settles at the mouth of the river while the finer fraction is deposited further
9. Abrasive wear based predictive maintenance for systems operating in sandy conditions
NARCIS (Netherlands)
Woldman, M.; Tinga, T.; Heide, E. van der; Masen, M.A.
2015-01-01
Machines operating in sandy environments are damaged by the abrasive action of sand particles that enter the machine and become entrapped between components and contacting surfaces. In the case of the military services the combination of a sandy environment and the wide range of tasks to be
10. Measuring Sandy Bottom Dynamics by Exploiting Depth from Stereo Video Sequences
DEFF Research Database (Denmark)
Musumeci, Rosaria E.; Farinella, Giovanni M.; Foti, Enrico
2013-01-01
In this paper an imaging system for measuring sandy bottom dynamics is proposed. The system exploits stereo sequences and projected laser beams to build the 3D shape of the sandy bottom during time. The reconstruction is used by experts of the field to perform accurate measurements and analysis...
11. 33 CFR 80.170 - Sandy Hook, NJ to Tom's River, NJ.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sandy Hook, NJ to Tom's River, NJ. 80.170 Section 80.170 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY INTERNATIONAL NAVIGATION RULES COLREGS DEMARCATION LINES Atlantic Coast § 80.170 Sandy Hook, NJ to Tom's River...
12. 77 FR 74891 - Order Granting Exemptions From Certain Rules of Regulation SHO Related to Hurricane Sandy
Science.gov (United States)
2012-12-18
... Client Update on Superstorm Sandy--Current and Ongoing Operations as Markets Re-Open; Physical.../downloads/legal/imp_notices/2012/dtcc/z0033.pdf ; DTCC Client Update on Superstorm Sandy--Physical...://www.dtcc.com/downloads/legal/imp_notices/2012/dtcc/z0035.pdf ; `DTCC Client Update on Superstorm...
13. Fine-scale spatial distribution of plants and resources on a sandy soil in the Sahel
NARCIS (Netherlands)
Rietkerk, M.G.; Ouedraogo, T.; Kumar, L.; Sanou, S.; Langevelde, F. van; Kiema, A.; Koppel, J. van de; Andel, J. van; Hearne, J.; Skidmore, A.K.; Ridder, N. de; Stroosnijder, L.; Prins, H.H.T.
2002-01-01
We studied fine-scale spatial plant distribution in relation to the spatial distribution of erodible soil particles, organic matter, nutrients and soil water on a sandy to sandy loam soil in the Sahel. We hypothesized that the distribution of annual plants would be highly spatially autocorrelated
14. Although the benthic macrofauna of sandy environ- ments around ...
African Journals Online (AJOL)
spamer
the flood-tidal delta of the Nahoon Estuary and adjacent beach near East London on the south-east coast of South Africa. Water content of sediments, temperature and exposure were identified as important .... Hermit crabs Diogenes brevi-.
15. Grain size analysis data collected by sediment corer and sediment grabber casts in the Chukchi sea from 1986-08-29 to 1987-10-07 (NODC Accession 9500158)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Grain size analysis data were collected by using sediment corer and sediment grabber casts in the Chukchi Sea and NW Coast of Alaska by the Chukchi Sea. Data were...
16. Water Infiltration and Hydraulic Conductivity in Sandy Cambisols
DEFF Research Database (Denmark)
Bens, Oliver; Wahl, Niels Arne; Fischer, Holger
2006-01-01
from pure Scots pine stands towards pure European beech stands. The water infiltration capacity and hydraulic conductivity (K) of the investigated sandy-textured soils are low and very few macropores exist. Additionally these pores are marked by poor connectivity and therefore do not have any...... of the experimental soils. The results indicate clearly that soils play a crucial role for water retention and therefore, in overland flow prevention. There is a need to have more awareness on the intimate link between the land use and soil properties and their possible effects on flooding.......Soil hydrological properties like infiltration capacity and hydraulic conductivity have important consequences for hydrological properties of soils in river catchments and for flood risk prevention. They are dynamic properties due to varying land use management practices. The objective...
17. Performance of social network sensors during Hurricane Sandy.
Directory of Open Access Journals (Sweden)
Yury Kryvasheyeu
Full Text Available Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the "friendship paradox", is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users' network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple "sentiment sensing" technique that can detect and locate disasters.
18. Hurricane Sandy: Shared Trauma and Therapist Self-Disclosure.
Science.gov (United States)
Rao, Nyapati; Mehra, Ashwin
2015-01-01
Hurricane Sandy was one of the most devastating storms to hit the United States in history. The impact of the hurricane included power outages, flooding in the New York City subway system and East River tunnels, disrupted communications, acute shortages of gasoline and food, and a death toll of 113 people. In addition, thousands of residences and businesses in New Jersey and New York were destroyed. This article chronicles the first author's personal and professional experiences as a survivor of the hurricane, more specifically in the dual roles of provider and trauma victim, involving informed self-disclosure with a patient who was also a victim of the hurricane. The general analytic framework of therapy is evaluated in the context of the shared trauma faced by patient and provider alike in the face of the hurricane, leading to important implications for future work on resilience and recovery for both the therapist and patient.
19. Performance of Social Network Sensors during Hurricane Sandy
Science.gov (United States)
Kryvasheyeu, Yury; Chen, Haohui; Moro, Esteban; Van Hentenryck, Pascal; Cebrian, Manuel
2015-01-01
Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow and derive early-warning sensors, thus improving emergency preparedness and response. Performance of the social networks sensor method, based on topological and behavioral properties derived from the “friendship paradox”, is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users’ network centrality effectively translate into moderate awareness advantage (up to 26 hours); and that geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such an advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening a possibility to implement a simple “sentiment sensing” technique that can detect and locate disasters. PMID:25692690
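A minimal sketch of the sensor-selection idea the two abstracts above describe (my own illustration, not the study's code): because of the friendship paradox, a randomly chosen friend of a randomly chosen user tends to be better connected than the user, so monitoring such "friend" accounts can surface disaster-related signals earlier.

```python
# Illustrative sketch of friendship-paradox sensor selection (not the study's
# code): sample a random user, then monitor one of that user's friends, who
# tends to have higher degree and therefore sees information earlier.
import random

def pick_sensors(friends_of: dict, n: int, seed: int = 0) -> list:
    """Return n sensor accounts chosen as random friends of random users."""
    rng = random.Random(seed)
    users = [u for u, friends in friends_of.items() if friends]
    return [rng.choice(friends_of[rng.choice(users)]) for _ in range(n)]

# Toy follower graph (hypothetical): "hub" is connected to everyone.
graph = {"a": ["hub"], "b": ["hub", "c"], "c": ["hub"], "hub": ["a", "b", "c"]}
print(pick_sensors(graph, 3))  # the well-connected "hub" appears frequently
```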
20. Oceans Past
DEFF Research Database (Denmark)
Based on research for the History of Marine Animal Populations project, Oceans Past examines the complex relationship our forebears had with the sea and the animals that inhabit it. It presents eleven studies ranging from fisheries and invasive species to offshore technology and the study of marine...... environmental history, bringing together the perspectives of historians and marine scientists to enhance understanding of ocean management of the past, present and future. In doing so, it also highlights the influence that changes in marine ecosystems have upon the politics, welfare and culture of human...
1. Ocean energy
International Nuclear Information System (INIS)
2009-01-01
There are 5 different ways of harnessing ocean energy: tides, swells, currents, osmotic pressure and deep water thermal gradients. The tidal power sector is the most mature. A single French site, the Rance tidal power station (240 MW), commissioned in 1966, produces 90% of the world's ocean energy. Smaller scale power stations operate around the world: 10 are operating in the European Union and 5 are being tested. Underwater generators and wave energy converters are expanding. In France a 1 km2 sea test platform is planned for 2010. (A.C.)
2. Modelling of the mechanical behaviour of a shaly sediment during burial
Energy Technology Data Exchange (ETDEWEB)
Vandromme, R.; Parize, O.; Hadj Hassen, F.; Beaudoin, B. [Ecole nationale superieure des mines de Paris (CGES - Sedimentologie), 77 - Fontainebleau (France); Schneider, F. [Institut francais du petrole, 92 - Rueil Malmaison (France); Trouiller, A. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)
2005-07-01
Early fractures in shaly formations can play a role in their stability and therefore in their permeability (sealing). A mechanical understanding of early fracturing is thus necessary to determine the physical parameters that can explain it and to allow prediction of future fractured zones, both to anticipate possible fluid circulation between different hydrocarbon reservoirs during their exploitation and to evaluate the shaly materials that could constitute a site for radioactive waste confinement. The outcrops of Bevons, Nyons and Rosans in the South-East of France and those of the Numidian in Sicily, Tunisia, Morocco... allow the observation of fractures that have been fossilised by sandy injections fed by turbidity channels. Two types of injection are present: sills (horizontal) and dykes (vertical), the dykes coming from sills. Their formation is either per ascensum (post-depositional) or per descensum, contemporaneous with the emplacement of the sand feeder. For the moment, we are interested in the latter type of injection, which represents the largest dyke systems known. The ptygmatic folding of some per descensum dykes indicates that fracturing and injection occurred while the sediments were compacting. Near the paleo-sea-floor, the fractures cut the lithologies without regard to the different beds. On the contrary, at the bottom of the series, limy beds are cut perpendicularly whereas shaly beds are mostly cut obliquely. These observations indicate that at the time of the injection the bottom of the series was made of a shaly-limy alternation whereas the top consisted of more homogeneous materials. The sediments thus differentiate during compaction. The original rheological properties of the superficial part of the series cannot be measured today but are essential to improve our simulations. For that reason, a search for examples of mechanical data for a compacting sediment in a marine environment was undertaken (Ocean Drilling Projects
3. Phosphorus distribution in sandy soil profile under drip irrigation system
International Nuclear Information System (INIS)
El-Gendy, R.W.; Rizk, M.A.; Abd El Moniem, M.; Abdel-Aziz, H.A.; Fahmi, A.E.
2009-01-01
This work aims to study the impact of irrigation water applied using a drip irrigation system in sandy soil cropped with snap bean on phosphorus distribution. The experiment was carried out at the Soils and Water Research Department farm, Nuclear Research Center, Atomic Energy Authority, Cairo, Egypt. Snap bean was cultivated in sandy soil and irrigated with 50, 37.5 and 25 cm of water in three water treatments representing 100, 75 and 50% ETc. Phosphorus distribution and the direction of soil water movement were detected at three sites on the dripper line (S1, S2 and S3 at 0, 12.5 and 25 cm distance from the dripper). Phosphorus fertilizer (superphosphate, 15.5% P2O5, at a rate of 300 kg/fed) was added before cultivation. A neutron probe was used to detect the water distribution and movement at the three sites along the soil profile. Soil samples were collected before P addition and at the end of the developing, mid and late growth stages to determine residual available phosphorus. The data obtained showed that using 50 cm of irrigation water caused an increase in P concentration down to 75 cm depth at the three sites of the 100% ETc treatment and covered the P requirements of snap bean for all growth stages, whereas 37.5 and 25 cm of irrigation water could not cover the P requirements of snap bean for all growth stages. It could be concluded that the applied irrigation water could drive the residual P down to 75 cm depth at the three sites. Yield of the crop was taken as an indicator. Yield showed a good response according to water quantities and P transport within the soil profile
4. Quantifying human mobility perturbation and resilience in Hurricane Sandy.
Directory of Open Access Journals (Sweden)
Qi Wang
Full Text Available Human mobility is influenced by environmental change and natural disasters. Researchers have used trip distance distribution, radius of gyration of movements, and individuals' visited locations to understand and capture human mobility patterns and trajectories. However, our knowledge of human movements during natural disasters is limited owing to both a lack of empirical data and the low precision of available data. Here, we studied human mobility using high-resolution movement data from individuals in New York City during and for several days after Hurricane Sandy in 2012. We found the human movements followed truncated power-law distributions during and after Hurricane Sandy, although the β value was noticeably larger during the first 24 hours after the storm struck. Also, we examined two parameters: the center of mass and the radius of gyration of each individual's movements. We found that their values during perturbation states and steady states are highly correlated, suggesting human mobility data obtained in steady states can possibly predict the perturbation state. Our results demonstrate that human movement trajectories experienced significant perturbations during hurricanes, but also exhibited high resilience. We expect the study will stimulate future research on the perturbation and inherent resilience of human mobility under the influence of hurricanes. For example, mobility patterns in coastal urban areas could be examined as hurricanes approach, gain or dissipate in strength, and as the path of the storm changes. Understanding nuances of human mobility under the influence of such disasters will enable more effective evacuation, emergency response planning and development of strategies and policies to reduce fatality, injury, and economic loss.
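For readers unfamiliar with the two parameters named above, here is a minimal sketch of how a trajectory's center of mass and radius of gyration are commonly computed (my own illustration with planar coordinates in metres and made-up points, not the study's code or data):

```python
# Minimal sketch (not the study's code): center of mass and radius of gyration
# of one individual's visited locations, treated here as planar x/y in metres.
import math

def center_of_mass(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def radius_of_gyration(points):
    cx, cy = center_of_mass(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / len(points))

# Hypothetical four-point trajectory around a 400 m x 300 m block.
track = [(0.0, 0.0), (400.0, 0.0), (400.0, 300.0), (0.0, 300.0)]
print(center_of_mass(track), round(radius_of_gyration(track), 1))  # (200.0, 150.0) 250.0
```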
5. Depositional dynamics in the El'gygytgyn Crater margin: implications for the 3.6 Ma old sediment archive
Directory of Open Access Journals (Sweden)
G. Schwamborn
2012-11-01
Full Text Available The combination of permafrost history and dynamics, lake level changes and the tectonical framework is considered to play a crucial role for sediment delivery to El'gygytgyn Crater Lake, NE Russian Arctic. The purpose of this study is to propose a depositional framework based on analyses of the core strata from the lake margin and historical reconstructions from various studies at the site. A sedimentological program has been conducted using frozen core samples from the 141.5 m long El'gygytgyn 5011-3 permafrost well. The drill site is located in sedimentary permafrost west of the lake that partly fills the El'gygytgyn Crater. The total core sequence is interpreted as strata building up a progradational alluvial fan delta. Four macroscopically distinct sedimentary units are identified. Unit 1 (141.5–117.0 m) is comprised of ice-cemented, matrix-supported sandy gravel and intercalated sandy layers. Sandy layers represent sediments which rained out as particles in the deeper part of the water column under highly energetic conditions. Unit 2 (117.0–24.25 m) is dominated by ice-cemented, matrix-supported sandy gravel with individual gravel layers. Most of the Unit 2 diamicton is understood to result from alluvial wash and subsequent gravitational sliding of coarse-grained (sandy gravel) material on the basin slope. Unit 3 (24.25–8.5 m) has ice-cemented, matrix-supported sandy gravel that is interrupted by sand beds. These sandy beds are associated with flooding events and represent near-shore sandy shoals. Unit 4 (8.5–0.0 m) is ice-cemented, matrix-supported sandy gravel with varying ice content, mostly higher than below. It consists of slope material and creek fill deposits. The uppermost metre is the active layer (i.e. the top layer of soil with seasonal freeze and thaw) into which modern soil organic matter has been incorporated. The nature of the progradational sediment transport taking place from the western and northern crater margins may be
6. 2012 USACE Topobathy Lidar: Post Sandy (NJ & NY)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — These files contain classified topographic and bathymetric lidar data as unclassified valid topographic data (1) and valid topographic data classified as ground (2),...
7. 2012 USACE Post Sandy Topographic LiDAR: Coastal Connecticut
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data has been acquired and developed by the U.S. Corps of Engineers ST. Louis District to collect and deliver topographic elevation point data derived from...
8. 78 FR 7780 - Sunshine Act Meeting; FCC Announces Further Details for the First Post-Superstorm Sandy Field...
Science.gov (United States)
2013-02-04
... First Post-Superstorm Sandy Field Hearing, Tuesday, February 5, 2013 AGENCY: Federal Communications Commission. ACTION: Sunshine notice. SUMMARY: In the wake of Superstorm Sandy, Federal Communications... focusing on the impact of Superstorm Sandy, and help inform recommendations and actions to strengthen wired...
9. Vertical distribution of 55Fe in the ocean
International Nuclear Information System (INIS)
Jennings, C.D.
1976-01-01
The highest concentrations of 55Fe in the ocean are found in the epipelagic and mesopelagic zones, with only low concentrations occurring in benthic animals and sediments. 55Fe in the sediment appears in a very thin surface layer in the equatorial Pacific, so that great care in sampling must be exercised to ensure accurate measurement
10. On the ecogeomorphological feedbacks that control tidal channel network evolution in a sandy mangrove setting
Science.gov (United States)
van Maanen, B.; Coco, G.; Bryan, K. R.
2015-01-01
An ecomorphodynamic model was developed to study how Avicennia marina mangroves influence channel network evolution in sandy tidal embayments. The model accounts for the effects of mangrove trees on tidal flow patterns and sediment dynamics. Mangrove growth is in turn controlled by hydrodynamic conditions. The presence of mangroves was found to enhance the initiation and branching of tidal channels, partly because the extra flow resistance in mangrove forests favours flow concentration, and thus sediment erosion in between vegetated areas. The enhanced branching of channels is also the result of a vegetation-induced increase in erosion threshold. On the other hand, this reduction in bed erodibility, together with the soil expansion driven by organic matter production, reduces the landward expansion of channels. The ongoing accretion in mangrove forests ultimately drives a reduction in tidal prism and an overall retreat of the channel network. During sea-level rise, mangroves can potentially enhance the ability of the soil surface to maintain an elevation within the upper portion of the intertidal zone, while hindering both the branching and headward erosion of the landward expanding channels. The modelling results presented here indicate the critical control exerted by ecogeomorphological interactions in driving landscape evolution. PMID:26339195
11. Turning the tide: effects of river inflow and tidal amplitude on sandy estuaries in laboratory landscape experiments
Science.gov (United States)
Kleinhans, Maarten; Braat, Lisanne; Leuven, Jasper; Baar, Anne; van der Vegt, Maarten; van Maarseveen, Marcel; Markies, Henk; Roosendaal, Chris; van Eijk, Arjan
2016-04-01
Many estuaries formed over the Holocene through a combination of fluvial and coastal influxes, but how estuary planform shape and size depend on tides, wave climate and river influxes remains unclear. Here we use a novel tidal flume setup of 20 m length by 3 m width, the Metronome (http://www.uu.nl/metronome), to create estuaries and explore a parameter space for the simple initial condition of a straight river in a sandy substrate. Tidal currents are capable of transporting sediment in both the ebb and flood phase because they are generated by periodic tilting of the flume rather than by the classic method of water level fluctuation. Particle imaging velocimetry and a 1D shallow flow model demonstrate that this principle leads to sediment mobility similar to that in nature. Ten landscape experiments recorded by time-lapse overhead imaging and Agisoft DEMs of the final bed elevation show that the absence of river inflow leads to short tidal basins, whereas even a minor discharge leads to long convergent estuaries. Estuary width and length, as well as the morphological time scale over thousands of tidal cycles, strongly depend on tidal current amplitude. Paddle-generated waves subdue the ebb delta, causing stronger tidal currents in the basin. Bar length-width ratios in estuaries are slightly larger than those in braided rivers, in both experiments and nature. Mutually evasive ebb- and flood-dominated channels are ubiquitous and appear to be formed by an instability mechanism with growing bar and bifurcation asymmetry. Future experiments will include mud flats and live vegetation.
12. Vertical distribution of meiofauna on reflective sandy beaches
Directory of Open Access Journals (Sweden)
Mariana de Oliveira Martins
2015-12-01
Full Text Available Abstract Extreme physical conditions usually limit the occurrence and distribution of meiofauna in highly hydrodynamic environments such as reflective beaches. Despite sediment grains of the upper layers being constantly resuspended and deposited, the high energy of the swash zone, besides depositing coarse sediments, allows an ample vertical distribution of meiofaunal organisms. The effect of physical, chemical and sediment variables on the vertical distribution of meiofaunal organisms and nematodes was analysed on two reflective exposed beaches. Sampling was conducted at three sampling points on each beach in the swash zone. The sediment collected was divided into four 10-cm strata (0-10 cm, 10-20 cm, 20-30 cm, 30-40 cm). The statistical differences between strata due to previously established factors (i.e. meiofaunal composition, density of the most abundant taxa) were tested using a hierarchical PERMANOVA applied under similarity and Euclidean distances. An inverse relation among average grain size, content of organic matter and sediment sorting was evident. Coarser sediment characterized the upper layers, while at deeper layers the sediment was very poorly sorted and presented a higher content of organic matter. A similar pattern in the vertical distribution of meiofaunal and nematofaunal composition and density was detected. The lowest densities were associated with the first stratum (0-10 cm), which is highly affected by hydrodynamics. The vertical distribution of organisms was statistically different only when the interaction among factors was considered. This result suggests that the zonation and vertical distribution of meiofaunal organisms are determined by the within-beach variability.
13. Biostratigraphic analysis of the top layer of sediment cores from the reference and test sites of the INDEX area
Digital Repository Service at National Institute of Oceanography (India)
Gupta, S.M.
Radiolarian fossil study in the sediment cores collected during the pre- and postdisturbance cruises of the Environmental Impact Assessment (EIA) Indian Ocean Experiment (INDEX) program of deep sea mining in the Central Indian Ocean Basin suggests a...
14. Regional Variations of REE Patterns in Sediments from Active Plate Boundaries
DEFF Research Database (Denmark)
Kunzendorf, H.; Stoffers, P.; Gwozdz, R.
1988-01-01
About 150 sediment samples from mid-ocean ridges (East Pacific Rise, Central Indian Ocean Ridge, Carlsberg Ridge and the Red Sea) and from a back-arc spreading environment (Lau Basin) were analyzed by instrumental neutron activation. A ratio method for rare-earth elements involving a plot...... of elemental ratios of Ce/La and Ce/Yb is proposed to characterize marine sediments. In the characterization plot East Pacific Rise and Lau Basin sediments occupy distinct fields in the plot suggesting hydrothermal overprint, while sediments from the Central Indian Ocean and the Carlsberg Ridge plot...
15. Ocean Acidification
Science.gov (United States)
Ludwig, Claudia; Orellana, Mónica V.; DeVault, Megan; Simon, Zac; Baliga, Nitin
2015-01-01
The curriculum module described in this article addresses the global issue of ocean acidification (OA) (Feely 2009; Figure 1). OA is a harmful consequence of excess carbon dioxide (CO2) in the atmosphere and poses a threat to marine life, both algae and animal. This module seeks to teach and help students master the cross-disciplinary…
16. Benthic faunal sampling adjacent to the Barbers Point ocean outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900098)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment grain size...
17. Ocean energies
International Nuclear Information System (INIS)
Charlier, R.H.; Justus, J.R.
1993-01-01
This timely volume provides a comprehensive review of current technology for all ocean energies. It opens with an analysis of ocean thermal energy conversion (OTEC), with and without the use of an intermediate fluid. The historical and economic background is reviewed, and the geographical areas in which this energy could be utilized are pinpointed. The production of hydrogen as a side product, and environmental consequences of OTEC plants are considered. The competitiveness of OTEC with conventional sources of energy is analysed. Optimisation, current research and development potential are also examined. Separate chapters provide a detailed examination of other ocean energy sources. The possible harnessing of solar ponds, ocean currents, and power derived from salinity differences is considered. There is a fascinating study of marine winds, and the question of using the ocean tides as a source of energy is examined, focussing on a number of tidal power plant projects, including data gathered from China, Australia, Great Britain, Korea and the USSR. Wave energy extraction has excited recent interest and activity, with a number of experimental pilot plants being built in northern Europe. This topic is discussed at length in view of its greater chance of implementation. Finally, geothermal and biomass energy are considered, and an assessment of their future is given. The authors also distinguished between energy schemes which might be valuable in less-industrialized regions of the world, but uneconomical in the developed countries. A large number of illustrations support the text. This book will be of particular interest to energy economists, engineers, geologists and oceanographers, and to environmentalists and environmental engineers
18. Prediction of bedload sediment transport for heterogeneous sediments in shape
Science.gov (United States)
Durafour, Marine; Jarno, Armelle; Le Bot, Sophie; Lafite, Robert; Marin, François
2015-04-01
Key words: Particle shape, in-situ measurements, bedload transport, heterogeneous sediments. Bedload sediment transport in the coastal area is a dynamic process mainly influenced by the type of hydrodynamic forcing involved (current and/or waves), the flow properties (velocity, viscosity, depth) and sediment heterogeneity (particle size, density, shape). Although particle shape is recognized to be a significant factor in the hydrodynamic behavior of grains, this parameter is not currently implemented in bedload transport formulations: firstly because the mechanisms of initiation of motion according to particle shape are still not fully understood, and secondly due to the difficulties in defining common shape parameters. In March 2011, a large panel of in-situ instruments was deployed on two sites in the Eastern English Channel during the sea campaign MESFLUX11. Samples of the sediment cover available for transport were collected, during a slack period, per 2 cm thick stratum by divers and by using a Shipeck grab. Bedload discharges along a tidal cycle were also collected with a Delft Nile Sampler (DNS; Gaweesh and Van Rijn, 1992, 1994) on both sites. The first site is characterized by a sandy bed with a low size dispersion, while the other study area involves graded sediments from fine sands to granules. A detailed analysis of the data is performed to follow the evolution of in-situ bedload fluxes on the seabed for a single current. In-situ measurements are compared to existing formulations according to a single-fraction approach, using the median diameter of the mixture, and a fractionwise approach, involving a discretization of the grading curve. Results emphasize the value of switching between these two methods according to the size dispersion of the site considered. The need to apply a hiding/exposure coefficient (Egiazaroff, 1965) and a hindrance factor (Kleinhans and Van Rijn, 2002) for size-heterogeneous sediments is also clearly highlighted. A really good
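For context on the hiding/exposure correction cited above (taken from the general sediment-transport literature, not from this abstract), the Egiazaroff (1965) coefficient is commonly written as a multiplier on the critical Shields stress of each grain-size fraction $d_i$ relative to the mean diameter $d_m$ of the mixture,

$$\xi_i = \left[\frac{\log_{10} 19}{\log_{10}\!\left(19\,d_i/d_m\right)}\right]^{2}, \qquad \theta_{cr,i} = \xi_i\,\theta_{cr},$$

so that finer fractions ($d_i < d_m$) receive a raised threshold (hiding) and coarser fractions a lowered one (exposure).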
19. The Life of a Sponge in a Sandy Lagoon.
Science.gov (United States)
Ilan, M; Abelson, A
1995-12-01
Infaunal soft-bottom invertebrates benefit from the presence of sediment, but sedimentation is potentially harmful for hard-bottom dwellers. Most sponges live on hard bottom, but on coral reefs in the Red Sea, the species Biemna ehrenbergi (Keller, 1889) is found exclusively in soft-bottom lagoons, usually in the shallowest part. This location is a sink environment, which increases the deposition of particulate organic matter. Most of the sponge body is covered by sediment, but the chimney-like siphons protrude from the sediment surface. The sponge is attached to the buried beach-rock, which reduces the risk of dislodgment during storms. Dye injected above and into the sediment revealed, for the first time, a sponge pumping interstitial water (rich with particles and nutrients) into its aquiferous system. Visual examination of plastic replicas of the aquiferous system and electron microscopical analysis of sponge tissue revealed that the transcellular ostia are mostly located on the buried surface of the sponge. The oscula, however, are located on top of the siphons; their elevated position and their ability to close combine to prevent the filtering system outflow from clogging. The transcellular ostia presumably remain open due to cellular mobility. The sponge maintains a large population of bacteriocytes, which contains bacteria of several different species. Some of these bacteria disintegrate, and may be consumed by the sponge.
20. Gas hydrate dissociation prolongs acidification of the Anthropocene oceans
NARCIS (Netherlands)
Boudreau, B.P.; Luo, Yiming; Meysman, Filip J R; Middelburg, J.J.; Dickens, G.R.
2015-01-01
Anthropogenic warming of the oceans can release methane (CH4) currently stored in sediments as gas hydrates. This CH4 will be oxidized to CO2, thus increasing the acidification of the oceans. We employ a biogeochemical model of the multimillennial carbon cycle to determine the evolution of the
https://numberwarrior.wordpress.com/2010/04/28/systems-of-equations-via-playing-football/
## Systems of Equations via Playing Football
(International note: Before my visitors from across either pond skip ahead in their reader, note this lesson is quite doable with any sort of thrown ball.)
This falls into my grab bag of “physical challenge” lessons, where the students are tasked with performing some act that could be done normally, but that mathematics makes much easier.
CHALLENGE: Throw a football such that a receiver catches it without slowing down.
The exact conditions depend on if you’re doing the beginner or advanced variant. Here’s beginner:
(International note #2: Tweak for metric. Change “football” and “quarterback” to the ball and name-for-person-who-throws of your choice.)
After setting up the challenge (and nominating quarterbacks and receivers) toss ideas around until the students realize they’ll need the speed of each quarterback’s throw and each receiver’s run.
Outside trip #1: Take everyone outside and lay out a distance, say 20 yards. Have the quarterbacks take turns throwing and get the rest of the class to time the balls from release to catch. (To keep everyone busy, I have students in pairs where one student is timing and the other is writing stuff down. I have them use their cell phones for the time, but if your school policy does not allow this I recommend finding the track coach and borrowing some stopwatches.) Also have each receiver take turns running the same distance (with those times tracked as well).
If you’re feeling punchy, have your students attempt the stunt a couple times. If your students are like mine they can do it but only if the receiver is allowed to slow down. Emphasize the fact they just got tackled.
Back to inside: Students should have enough to work out the necessary speeds. Then leading by discovery or whatever path you desire, they need to realize the formula distance = speed * time (d=st) is applicable here, specifically two equations:
distance of catch = speed of quarterback * air time of ball
distance of catch = speed of receiver * air time of ball + distance receiver is away when ball is thrown
that is
$d = s_1 \cdot t$
$d = s_2 \cdot t + 10$
where d and t ought to be the same for the receiver to make the perfect catch, the 10 is the receiver's 10-yard head start at the moment the ball is thrown, and $s_1$ and $s_2$ vary depending on the quarterback / receiver match. (I have them work out the calculation for every possible pairing, which requires a moment of combinatorics on their part.)
Solving this system will lead to various answers for d and t; make sure the students write down all of them.
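For teachers who want to generate an answer key quickly, here is a minimal sketch of the solving step (an addition, not part of the original lesson); the speeds are made-up sample values in yards per second, and the 10 is the receiver's head start from the beginner setup.

```python
# Solve d = s1*t and d = s2*t + 10 for every quarterback/receiver pairing.
# All speeds are in yards per second; the numbers here are illustrative only.

qb_speeds = {"QB A": 18.0, "QB B": 15.5}        # ball speed from the timed throws
receiver_speeds = {"WR X": 7.5, "WR Y": 6.8}    # running speed from the timed sprints

HEAD_START = 10  # yards the receiver is downfield when the ball is released

for qb, s1 in qb_speeds.items():
    for wr, s2 in receiver_speeds.items():
        # From d = s1*t and d = s2*t + HEAD_START:  (s1 - s2) * t = HEAD_START
        t = HEAD_START / (s1 - s2)   # air time of the ball, in seconds
        d = s1 * t                   # distance of the catch, in yards
        print(f"{qb} -> {wr}: catch at {d:.1f} yd, ball in the air {t:.2f} s")
```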
Outside trip #2: Students use what they have learned to attempt the stunt. Bring a cone so the students can mark where the quarterback should be throwing on a given trial. Attempt each pairing multiple times, having the non-athletes timing again (to check against the times they solved for).
Back to inside #2: Since this was more or less a science experiment, students should write up conclusions as well as answer questions like “how could our setup be improved?” or “what other sports situations might mathematics help us with?”
This was the original. This time the students solve for the angle the quarterback needs to throw at. I created this lesson to teach parametric equations. It’s good for trigonometry or pre-calculus.
Does it work? Yes.
Really? What about variability of throws and running? You need to tell your quarterbacks and receivers to be as consistent as possible. They’re usually a little off, but it’s amazing how close what they do matches what the math says they should do.
What about wind? It’s a bummer (unless you are going advanced all the way to vectors). I use a walkway that is outdoors but shielded from wind. Even without shielding as long as the wind is relatively light you should be ok.
Could this be used to help actual football players? Yes.
### 8 Responses
1. Not sure if this is related, but this article reminded me of this situation:
My friend and I are at the beach, my friend is at the water’s edge and I am on the boardwalk some distance away. The boardwalk is parallel to the water’s edge, and I can walk faster on the boardwalk than on the sand. At what point do I leave the boardwalk to reach my friend in the shortest amount of time? In other words, at what point do I cut the corner? or what if my friend is additionally walking along the water’s edge?
2. I’m intrigued! Now is this throw a straight line, or would it follow a parabolic curve (or something similar)? How does that factor into the system of equations?
• The parabola affects the speed of the ball, but in my runs of this lesson the students have managed to throw it consistently enough that the effect can safely be ignored.
If a student lobs it really high or really low at random, they aren’t a good choice for quarterback anyway (if nobody is a good choice, use an easier-to-handle ball).
3. 1)I wonder if a shuttlecock like that used in Badminton would be more consistent in speed? Doesn’t the design exponentially slow down the birdie if hit at too high a speed?
2)I would never use the first diagram in a classroom. My first reaction was not to see that as a diagram of throwing a football, and my mind isn’t even that far in the gutter. The students would surely catch on even quicker.
4. I’ve been looking for a way to incorporate projectile motion with linear motion and your post inspired this
quarterback applet.
5. […] Algebra 3 System of Equations Problem Idea May 30, 2010 I just found this. It might be neat to try, especially if I end up with some football players in my […]
6. […] – What an interesting mix of riddles, observations and sharing! Watch the football-based equations, Q*Bert binomial theorem videos, and the math notation examples for inspiration. The blog […]
7. […] what hurdles still need to be leaped before we get to the Stuff My Freshmen Actually Care About (the football lesson is still one of my most popular activities […]
http://math.stackexchange.com/questions/386536/solution-gives-wrong-answer-to-probability-problem
# Solution gives wrong answer to probability problem
Great Northern Airlines flies small planes in northern Canada and Alaska. Their largest plane can seat 16 passengers seated in 8 rows of 2. On a certain flight flown on this plane, they have 12 passengers and one large piece of equipment to transport. The equipment is so large that it requires two seats in the same row. The computer randomly assigns seats to the 12 passengers (no 2 passengers will have the same seat). What is the probability that there are two seats in the same row available for the equipment?
This problem was posed on Brilliant last week, and now that the official solution is posted, I would like to know why my solution gives the wrong result. Here is my solution:
The number of ways we can seat 12 (identical) people on 16 seats is ${16\choose 12}$. Now the equipment can occupy any of the rows (8 possibilities), and the 12 people must be seated on the remaining 14 seats (${14\choose 12}$ ways). Thus the desired probability is $$\frac{8 {14\choose12}}{{16\choose12}}=\frac25$$ The official solution gives $\frac5{13}$, but I don't understand what is wrong with mine.
-
Note that the link you provided is unique to you. You need to use the "Share this problem" link for others to view the problem properly. – Calvin Lin May 10 '13 at 0:55
@CalvinLin Oh, I'll keep that in mind. – Dave May 10 '13 at 12:02
Think of it this way - you can place the four empty seats, with the restriction that at least one pair of them must be together in a row. So subtract off the number of ways you can have no such pairs from the total number of combinations: $${16\choose4}-{8\choose4}\cdot2^4 = 700$$ since in an arrangement with no pair, each of the four rows containing an empty seat can have it in either of its two seats. So the probability is $$\frac{700}{1820}=\frac{5}{13}$$ (note that this means that you were double-counting 28 combinations... which makes sense, as $28 = {8\choose2}$, and that's the number of ways you can have two pairs of empty seats)
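Since ${16\choose 4} = 1820$ is tiny, the $\frac{5}{13}$ answer is also easy to confirm by brute force. The following sketch (an addition, not from the thread) enumerates every choice of 4 empty seats and counts those leaving a full row free:

```python
from fractions import Fraction
from itertools import combinations

# Seats 0..15; the 8 rows are the pairs (0,1), (2,3), ..., (14,15).
rows = [(2 * r, 2 * r + 1) for r in range(8)]

favourable = 0
total = 0
for empty in combinations(range(16), 4):      # the 4 seats left empty
    total += 1
    empty_set = set(empty)
    if any(a in empty_set and b in empty_set for a, b in rows):
        favourable += 1                       # some row is entirely empty

print(Fraction(favourable, total))            # prints 5/13 (i.e. 700/1820)
```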
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1282
## Continuous Location of Dimensional Structures
• A natural extension of point facility location problems are those problems in which facilities are extensive, i.e. those that can not be represented by isolated points but as some dimensional structures such as straight lines, segments of lines, polygonal curves or circles. In this paper a review of the existing work on the location of extensive facilities in continuous spaces is given. Gaps in the knowledge are identified and suggestions for further research are made.
http://mathhelpforum.com/advanced-statistics/198319-conditional-entropy-function.html
Math Help - conditional entropy of function
1. conditional entropy of function
How does one show that $H(Y|f(X)) \geq H(Y|X)$, where $f(X)$ is any function of $X$?
2. Re: conditional entropy of function
$f(X)$ is a degraded observation of $X$, so we have that
$H(Y|f(X),X) = H(Y|X)$
Since conditioning reduces entropy,
$H(Y|f(X)) \geq H(Y|f(X),X) = H(Y|X)$
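For intuition, the inequality is easy to check numerically on a small example. The sketch below is an addition (not from the thread); the joint distribution is arbitrary and the function is the parity map $f(x) = x \bmod 2$.

```python
import numpy as np

# Arbitrary joint pmf p(x, y) for X in {0,1,2,3} (rows) and Y in {0,1} (columns).
p_xy = np.array([[0.10, 0.05],
                 [0.20, 0.05],
                 [0.05, 0.25],
                 [0.10, 0.20]])

def cond_entropy(joint):
    """H(Y|Z) in bits, for a joint pmf with Z indexing rows and Y indexing columns."""
    p_z = joint.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        p_y_given_z = np.where(p_z > 0, joint / p_z, 0.0)
        terms = np.where(joint > 0, -joint * np.log2(p_y_given_z), 0.0)
    return terms.sum()

# Joint pmf of (f(X), Y) with f(x) = x mod 2: merge the rows x=0,2 and x=1,3.
p_fy = np.array([p_xy[0] + p_xy[2], p_xy[1] + p_xy[3]])

print("H(Y|X)    =", cond_entropy(p_xy))
print("H(Y|f(X)) =", cond_entropy(p_fy))   # never smaller than H(Y|X)
```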
http://mathhelpforum.com/math-topics/31961-12u-physics-motion-dynamics-please-help.html
# Math Help - 12U Physics - Motion and Dynamics! Please Help!
1. ## 12U Physics - Motion and Dynamics! Please Help!
This is an assignment from my physics class that counts for 1/3 of my midterm mark. It's supposed to be basic ... I think ... but I just don't know where to start.
Here's the question Illustrated
http://i284.photobucket.com/albums/l...8/untitled.jpg
A pulley device is used to hurl projectiles from a ramp (coefficient of kinetic friction = 0.26). The 5.0 kg mass is accelerated from rest at the bottom of a 4.0 m long ramp by a falling 20.0 kg mass suspended over a frictionless pulley. Just as the 5.0 kg mass reaches the top of the ramp it detaches from the rope (neglecting the mass of the rope) and becomes a projectile off the ramp. Determine the horizontal range of the 5.0 kg mass from the base of the ramp.
Anything will help, even guesses.
2. Looking at the 5.0 kg block, the net force acting on it along the incline includes friction, the component of gravity along the ramp ($mg\sin\theta$), and the tension of the rope. Each of these can be calculated, and thus you can find the acceleration of the block ($F_{net} = ma$).
Now, use your kinematics equations to determine what its velocity is by the time it reaches the top of the ramp (you're given that the ramp is 4.0 m long) and note that it will be 'launched' at a 30 degree angle. You should be able to determine how far away it will land with all this in mind. Keep in mind that once the 5.0 kg mass is released, the rope is no longer accelerating it (at least, that is what is presumed), so it moves as a projectile.
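Following that recipe, here is a rough numerical sketch (an addition, not the original poster's work). It assumes the numbers visible in the thread: a 30° ramp, kinetic friction 0.26, a 4.0 m ramp, a 5.0 kg block pulled by a 20.0 kg hanging mass, and g = 9.81 m/s²; since the linked diagram is unavailable, the landing distance is measured horizontally from the launch point at the top of the ramp.

```python
import math

g, mu = 9.81, 0.26
m_block, m_hanging = 5.0, 20.0
theta = math.radians(30.0)       # ramp angle (assumed from the thread)
ramp_length = 4.0                # metres

# Acceleration of the connected system while the rope is attached:
# the hanging weight pulls; gravity along the ramp and friction resist.
a = (m_hanging * g - m_block * g * (math.sin(theta) + mu * math.cos(theta))) \
    / (m_block + m_hanging)

# Speed at the top of the ramp, starting from rest: v^2 = 2*a*d.
v = math.sqrt(2 * a * ramp_length)

# After release: projectile launched at 30 degrees above the horizontal from a
# height of ramp_length*sin(theta) above the ground.
h = ramp_length * math.sin(theta)
vx, vy = v * math.cos(theta), v * math.sin(theta)

# Solve h + vy*t - 0.5*g*t^2 = 0 for the positive root.
t_flight = (vy + math.sqrt(vy**2 + 2 * g * h)) / g
x = vx * t_flight

print(f"acceleration on the ramp: {a:.2f} m/s^2")
print(f"launch speed: {v:.2f} m/s")
print(f"horizontal distance from the launch point: {x:.2f} m")
```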
https://www.imlearningmath.com/in-scrabble-which-of-these-letters-is-worth-five-points/
# In Scrabble, which of these letters is worth five points?
W
J
K
Q
The Answer: The correct answer is K.
https://www.mapsofworld.com/where-is/turkoglu.html
# Where is Turkoglu
Location Maps of Cities in Turkey
Last Updated : July 22, 2016
http://projecteuler.net/problem=323
## Bitwise-OR operations on random integers
### Problem 323
Published on Sunday, 6th February 2011, 07:00 am; Solved by 1571
Let $y_0, y_1, y_2, \ldots$ be a sequence of random unsigned 32-bit integers
(i.e. $0 \le y_i < 2^{32}$, every value equally likely).
For the sequence $x_i$ the following recursion is given:
• $x_0 = 0$ and
• $x_i = x_{i-1} | y_{i-1}$, for $i > 0$. ($|$ is the bitwise-OR operator)
It can be seen that eventually there will be an index $N$ such that $x_i = 2^{32} - 1$ (a bit-pattern of all ones) for all $i \ge N$.
Find the expected value of N.
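(As a sanity check rather than a solution, a quick Monte Carlo estimate of $E[N]$ is easy to run; this sketch is an addition to the problem statement, not part of it.)

```python
import random

def sample_N(bits=32):
    """Number of random ORs needed before x becomes a bit-pattern of all ones."""
    target = (1 << bits) - 1
    x, n = 0, 0
    while x != target:
        x |= random.getrandbits(bits)   # OR in a fresh random unsigned 32-bit integer
        n += 1
    return n

trials = 100_000
estimate = sum(sample_N() for _ in range(trials)) / trials
print(f"Monte Carlo estimate of E[N] over {trials} trials: {estimate:.3f}")
```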
https://www.darwinproject.ac.uk/letter/DCP-LETT-3913.xml
# To J. D. Hooker 13 January [1863]
Down
Jan 13th
My dear Hooker
I send very imperfect answer to question, & which I have written on foreign paper, to save you copying & you can send when you write to Thomson in Calcutta.—1 Hereafter I shall be able to answer better your question about qualities induced in individual being inherited:2 gout in man,—loss of wool in sheep (which begins in 1st generation & takes 2 or 3 to complete) probably obesity (for it is rare with poor); probably obesity & early maturity in Short-horn Cattle, &c.—
I am very glad you like Huxley’s Lectures;3 I have been very much struck with them; especially with the philosophy of induction.—4 I have quarrelled with him with overdoing sterility & ignoring cases from Gärtner & Kölreuter about sterile varieties.5 His geology is obscure;6 & I rather doubt about man’s mind & language.—7 But it seems to me admirably done, & as you say “oh my” about the praise of the Origin:8 I can’t help liking it, which makes me rather ashamed of myself.—
I enclose Asa Gray;9 only last page & $\frac{1}{2}$ will interest you; but look at red (?) & rewrite names.10 Do not allude to Gray that you have seen this letter, as he might not like it, as he speaks of your being wrong (& converted, alas not so!) about Crossing.11 The sentence about Strawberries made me look at Bentham, & I have enclosed remark for him;12 I can assure him his remark would make any good horticulturist’s hair stand on end.13 It is marvellous to see Asa Gray so cock-sure about the doom of Slavery.—14
You wrote me a famous long letter a few days ago: Emma is going to read De TocVille & so was glad to hear your remarks.—15 I am glad to hear that you are going to do some work which will bring a little grist to the mill; but good Heavens how do you find time with Genera Plantarum, official work, friends, & Heaven knows what!16
Many thanks about Poison for Plants.—17 I know nothing about leaf-insects, except that they are carnivorous.— Andrew Murray knows.—18
You ask what I think about Falconer;19 of course I am much pleased at the very kind way he refers to me;20 but, as I look at it, the great gain is for any good man to give up immutability of species: the road is then open for progress; it is comparatively immaterial whether he believes in N. Selection; but how any man can persuade himself that species change unless he sees how they become adapted to their conditions is to me incomprehensible.—21 I do not see force of Falconer’s remarks about spire of shells, Phyllotaxis, &c:22 I suppose he did not look at my chapter on what I call laws of variation.—23
How very well Falconer writes: by the way in one of your letters you insisted on importance of style;24 I have just been struck with excellent instance in Alex. Braun on Rejuvenescence in Ray Soc 1853; I have tried & literally I cannot read it.25 Have you read it?
I have just received long pamphet by Alph. De Candolle on Oaks & allies,26 in which he has worked out in very complete & curious manner individual variability of species, & has wildish speculations on their migrations & duration &c.27 It is really curious to see how blind he is to the conditions or struggle for life; he attributes the presence of all species of all genera of trees to dryness or dampness! At end he has discussion on “Origin”;28 I have not yet come to this, but suppose it will be dead against it. Should you like to see this pamphlet?
My hot-house will begin building in a week or so,29 & I am looking with much pleasure at catalogues to see what plants to get: I shall keep to curious & experimental plants. I see I can buy Pitcher plants for only 10s .6!30 But the job is whether we shall be able to manage them. I shall get Sarracenia Dichœa your Hedysarum, Mimosa & all such funny things,, as far as I can without great expence.31 I daresay I shall beg for loan of some few orchids; especially for Acropera Loddigesii.32 I fancy orchids cost awful sums; but I must get priced catalogue. I can see hardly any Melastomas in catalogues.—33
I had a whole Box of small Wedgwood medallions; but drat the children everything in this house gets lost & wasted; I can find only about a dozen little things as big as shillings, & I presume worth nothing; but you shall look at them when here & take them if worth pocketing.34
You sent us a gratuitous insult about the “chimney-pots” in dining room, for you shan’t have them; nor are they Wedgwood ware.—35
Remember Naudin36
When you return you must remember my list of experimental seeds.—37 I hope you will enjoy yourself38
Goodnight my dear old friend | C. Darwin
You have not lately mentioned Mrs. Hooker: remember us most kindly to her.—39
## Footnotes
The reference is to the surgeon and botanist Thomas Thomson; Thomson lived in Calcutta only until 1860 or 1861 (DNB). The enclosure has not been found. See also letter from J. D. Hooker, [12 January 1863] and n. 2.
See letter from J. D. Hooker, [12 January 1863]. On 23 January 1863, CD began writing up his ‘Chapter on Inheritance’ for Variation, eventually published as chapters 12–14 (Variation 2: 1–84; see ‘Journal’ (Correspondence vol. 11, Appendix II)).
Thomas Henry Huxley presented an evening lecture series for working men at the Museum of Practical Geology in London during November and December 1862; the lectures were published as T. H. Huxley 1863a. See letter to T. H. Huxley, 10 [January 1863], and letter from J. D. Hooker, [12 January 1862].
T. H. Huxley 1863a, pp. 55–67. Huxley’s discussion of induction formed part of the third lecture, delivered on 24 November 1862 (‘The method by which the causes of the present and past conditions of organic nature are to be discovered.— The origination of living beings’). There is a lightly annotated copy of T. H. Huxley 1863a in the Darwin Library–CUL (see Marginalia 1: 425).
Huxley argued that the origin of species through natural selection could not be proven until artificial selection produced from a common stock varieties that were sterile with one another (T. H. Huxley 1863a, pp. 146–50). CD, by contrast, was impressed by the plant hybridisation experiments conducted by Karl Friedrich von Gärtner and Joseph Gottlieb Kölreuter (Origin, pp. 246–9, 257–9, 270–4; Gärtner 1844 and 1849; Kölreuter 1761–6). See letter to T. H. Huxley, 10 [January 1863], and Correspondence vol. 10, Appendix VI.
T. H. Huxley 1863a, pp. 29–52. CD refers particularly to pages 39–41, and to figure 5 on page 40, which he thought would be confusing to a non-geologist. See letters to T. H. Huxley, 7 December [1862] and n. 7, and 18 December [1862] (Correspondence vol. 10).
T. H. Huxley 1863a, pp. 153–6. While arguing that ‘man differs to no greater extent from the animals which are immediately below him than these do from other members of the same order’, Huxley wrote that it was largely the power of language that distinguished man ‘from the whole of the brute world’ (ibid., pp. 154–5).
See letter from J. D. Hooker, [12 January 1863], and letter to T. H. Huxley, 10 [January 1863] and n. 4.
In Asa Gray’s letter, CD marked some of the plant names with marginal crosses in red crayon, and Hooker clearly printed the names ‘Abronia’, ‘Nyctaginia’, ‘Pavonia’ for Pavonia hastata, and ‘Ruellia’. These were plants in which the plants flowering earlier in the season were pollinated in the bud (see Correspondence vol. 10, letter from Asa Gray, 29 December 1862).
In November and December 1862, CD and Hooker debated the effects of crossing on variation, with Hooker maintaining that self-fertilisation did not favour variation, ‘whereas crossing tends to variation by adding differences’ (see Correspondence vol. 10, letter from J. D. Hooker, 26 November 1862). CD agreed with Gray (A. Gray 1862d, p. 420) that: free cross-breeding of incipient varieties inter se and with their original types is just the way to blend all together, to repress all salient characteristics as fast as the mysterious process of variation originates them, and fuse the whole into a homogeneous form. See Correspondence vol. 10, letter to Asa Gray, 26[–7] November [1862].
The letter from Asa Gray, 29 December 1862 (Correspondence vol. 10), is incomplete; Gray’s statement concerning strawberries was made in a postscript that has not been located. However, in his account of strawberries in Variation 1: 351–4, CD considered it unlikely that hybrids of European and American strawberries were fertile enough to be worth cultivation. This fact was surprising to him ‘as these forms structurally are not widely distinct, and are sometimes connected in the districts where they grow wild, as I hear from Professor Asa Gray, by puzzling intermediate forms’ (Variation 1: 352). CD probably consulted George Bentham’s Handbook of the British flora (Bentham 1858; see n. 13, below). The enclosure for Bentham has not been found. See also letter to Asa Gray, 2 January [1863] and n. 17.
In his Handbook of the British flora, Bentham wrote that while several wild and cultivated strawberries had been proposed as species, ‘the great facility with which fertile hybrids are produced, gives reason to suspect that the whole genus … may prove to consist but of one species’ (Bentham 1858, pp. 191–2). CD’s annotated copy of Bentham 1858 is in the Rare Books Room–CUL (see Marginalia 1: 51).
The letter from Asa Gray, 29 December 1862 (Correspondence vol. 10), is incomplete; the portion containing Gray’s statement regarding events in the United States has not been found. Gray may have commented on Abraham Lincoln’s emancipation proclamation, which was to come into effect on 1 January 1863; from that time all slaves in territories still in rebellion were to be freed (see Denney 1992, pp. 248, 251).
Emma Darwin. CD refers to Alexis Henri Charles Maurice Clérel, comte de Tocqueville’s Democracy in America (H. Reeve trans. 1862). See Correspondence vol. 10, letter from J. D. Hooker, [21 December 1862], and this volume, letter from J. D. Hooker, 6 January 1863. CD had read Tocqueville’s De la démocratie en Amérique (Tocqueville 1836) in February 1849 (see Correspondence vol. 10, letter to J. D. Hooker, 24 December [1862], and Correspondence vol. 4, Appendix IV, 119: 22b).
Hooker had been commissioned to write a flora of New Zealand (J. D. Hooker 1864–7; see letter from J. D. Hooker, 6 January 1863). At the same time, Hooker was at work on Genera plantarum (Bentham and Hooker 1862–83), and also had official duties in his capacity as assistant director of the Royal Botanic Gardens, Kew.
In his letter to Hooker of 3 January [1863], CD asked for advice about how to prevent mould from growing on his children’s dried flower collections; for Hooker’s reply, see his letter of 6 January 1863. In his Account book–cash accounts (Down House MS), on 16 January 1863, CD recorded a payment of 9s. for ‘Poison for plants’ to the London importers and makers of chemical and photographic apparatus, Bolton & Barnitt of Holborn Bars, London.
Hooker had asked CD what he should feed newly hatched leaf insects from Java (see letter from J. D. Hooker, 6 January 1863). Andrew Murray was a botanist and entomologist with expertise in Coleoptera and insects harmful to crops (DNB).
Hooker had asked CD’s opinion of Falconer 1863a (see letter from J. D. Hooker, 6 January 1863).
Falconer 1863a, pp. 77–81 (see n. 21, below). See also letter to Hugh Falconer, 5 [and 6] January [1863] and n. 7.
In his article on fossil and recent elephants, Hugh Falconer praised CD and his theory of modified descent (Falconer 1863a, pp. 77, 80). At the same time, he argued that natural selection was an inadequate explanation for the origin of species since some species subject to variable conditions over time, such as the mammoths, had remained unchanged (Falconer 1863a, p. 80).
While Falconer conceded that forms like the mammoth and other extinct elephants were ‘modified descendants of earlier progenitors’ (Falconer 1863a, p. 80), he continued to argue against the adequacy of natural selection to explain this modification: The law of Phyllotaxis, which governs the evolution of leaves around the axis of a plant, is nearly as constant in its manifestation, as any of the physical laws connected with the material world. Each instance, however different from another, can be shown to be a term of some series of continued fractions. When this is coupled with the geometrical law governing the evolution of form, so manifest in some departments of the animal kingdom, e. g. the spiral shells of the Mollusca, it is difficult to believe, that there is not in nature, a deeper seated and innate principle, to the operation of which ‘Natural Selection’ is merely an adjunct.
Origin, pp. 131–70.
The reference has not been identified.
Braun 1851. The English title of the article was ‘Reflections on the phenomena of rejuvenescence in nature, especially in the life and development of plants’ (Henfrey trans. 1853). There is an annotated copy of Arthur Henfrey’s translation of Braun 1851 in the Darwin Library–CUL (see Marginalia 1: 366–7).
Alphonse de Candolle sent CD copies of A. de Candolle 1862a and 1862b. See Correspondence vol. 10, letter from Alphonse de Candolle, 18 September 1862; see also following letter. CD’s annotated copies of A. de Candolle 1862a and 1862b are in the Darwin Pamphlet Collection–CUL.
A. de Candolle 1862a, pp. 326–53. See following letter and n. 6.
A. de Candolle 1862a, pp. 354–61, 363. See Intellectual Observer 3 (1863): 81–6, for a translation of the last portion of A. de Candolle 1862b. See also following letter and n. 7.
See DAR 157.1: 111 and 112 for CD’s botanical notes on experiments with Nepenthes (pitcher plants).
CD had experimented on the power of movement in Hedysarum and Mimosa in 1862 (see Correspondence vol. 10).
CD was keen to obtain fresh flowers of Acropera; for CD’s continuing investigation of this orchid genus, see Correspondence vol. 10, letter from John Scott, 11 November 1862, and letter to John Scott, 12 November [1862], and this volume, letter from John Scott, 6 January 1863 and nn. 3 and 4.
Hooker had started to collect Wedgwood ware and was particularly interested in medallions. See Correspondence vol. 10, letter from J. D. Hooker, [27 or 28 December 1862], and this volume, letter to J. D. Hooker, 3 January [1863], and letter from J. D. Hooker, 6 January 1863.
With his letter to Hooker of 24 December [1862] (Correspondence vol. 10), CD enclosed a ‘memorandum of enquiry’ for Charles Victor Naudin, whom Hooker hoped to meet during his forthcoming visit to Paris (see n. 38, below).
In his letter to Hooker of 3 November [1862] (Correspondence vol. 10), CD enclosed a list of the seeds he wanted for experiments on sensitivity in plants. See also ibid., letter to J. D. Hooker, [10–]12 November [1862].
Hooker and Bentham departed for Paris on 17 January 1863 (Jackson 1906, p. 193).
Since the death of her father, John Stevens Henslow, in May 1861, Frances Harriet Hooker had been suffering from depression and ill-health (see Correspondence vols. 9 and 10).
## Bibliography
Bentham, George. 1858. Handbook of the British flora; a description of the flowering plants and ferns indigenous to, or naturalized in, the British Isles. London: Lovell Reeve.
Braun, Alexander Carl Heinrich. 1851. Betrachtungen über die Erscheinung der Verjüngung in der Natur, insbesondere in der Lebens- und Bildungsgeschichte der Pflanze. Leipzig: Wilhelm Engelmann.
Correspondence: The correspondence of Charles Darwin. Edited by Frederick Burkhardt et al. 27 vols to date. Cambridge: Cambridge University Press. 1985–.
Denney, Robert E. 1992. The civil war years: a day-by-day chronicle of the life of a nation. New York: Sterling Publishing.
DNB: Dictionary of national biography. Edited by Leslie Stephen and Sidney Lee. 63 vols. and 2 supplements (6 vols.). London: Smith, Elder & Co. 1885–1912. Dictionary of national biography 1912–90. Edited by H. W. C. Davis et al. 9 vols. London: Oxford University Press. 1927–96.
Gärtner, Karl Friedrich von. 1844. Versuche und Beobachtungen über die Befruchtungsorgane der vollkommeneren Gewächse und über die natürliche und künstliche Befruchtung durch den eigenen Pollen. Pt 1 of Beiträge zur Kenntniss der Befruchtung der vollkommeneren Gewächse. Stuttgart: E. Schweizerbart.
Hooker, Joseph Dalton. 1864–7. Handbook of the New Zealand flora: a systematic description of the native plants of New Zealand and the Chatham, Kermadec’s, Lord Auckland’s, Campbell’s, and MacQuarrie’s Islands. 2 vols. London: Lovell Reeve & Co.
Jackson, Benjamin Daydon. 1906. George Bentham. London: J. M. Dent. New York: E. P. Dutton.
Kölreuter, Joseph Gottlieb. 1761–6. Vorläufige Nachricht von einigen das Geschlecht der Pflanzen betreffenden Versuchen und Beobachtungen. Leipzig: Gleditschischen Handlung.
Marginalia: Charles Darwin’s marginalia. Edited by Mario A. Di Gregorio with the assistance of Nicholas W. Gill. Vol. 1. New York and London: Garland Publishing. 1990.
Origin: On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. By Charles Darwin. London: John Murray. 1859.
Tocqueville, Charles Alexis Henri Maurice Clérel de. 1836. De la démocratie en Amérique. 4th edition. 2 vols. in 1. Paris: Charles Gosselin.
Variation: The variation of animals and plants under domestication. By Charles Darwin. 2 vols. London: John Murray. 1868.
## Summary
Acquired characteristics.
Huxley’s lectures: good on induction, bad on sterility, obscure on geology.
Asa Gray on slavery.
Falconer’s partial conversion.
Alphonse de Candolle on Origin.
## Letter details
Letter no.
DCP-LETT-3913
From
Charles Robert Darwin
To
Joseph Dalton Hooker
Sent from
Down
Source of text
DAR 115: 179
Physical description
8pp
https://infoscience.epfl.ch/record/185913
Infoscience
Conference paper
# Mapping Dispersion Fluctuations along Optical Fibers Using Brillouin Probing and a Fast Analytic Calculation
A simple analytic formula is derived to extract tiny dispersion fluctuations along highly nonlinear fibers from distributed measurements of parametric gain. A refined BOTDA scheme, suitable to track Kerr processes, enables low noise measurements.
https://hal.inria.fr/hal-03024618
# Robustness of the Young/Daly formula for stochastic iterative applications
1 ROMA - Optimisation des ressources : modèles, algorithmes et ordonnancement
Inria Grenoble - Rhône-Alpes, LIP - Laboratoire de l'Informatique du Parallélisme
3 TADAAM - Topology-Aware System-Scale Data Management for High-Performance Computing
LaBRI - Laboratoire Bordelais de Recherche en Informatique, Inria Bordeaux - Sud-Ouest
Abstract : The Young/Daly formula for periodic checkpointing is known to hold for a divisible-load application, where one can checkpoint at any time-step. In a nutshell, the optimal period is $P_{YD} = \sqrt{2 \mu_f C}$, where $\mu_f$ is the Mean Time Between Failures (MTBF) and $C$ is the checkpoint time. This paper assesses the accuracy of the formula for applications decomposed into computational iterations where: (i) the duration of an iteration is stochastic, i.e., obeys a probability distribution law $D$ of mean $\mu_D$; and (ii) one can checkpoint only at the end of an iteration. We first consider static strategies where checkpoints are taken after a given number of iterations $k$ and provide a closed-form, asymptotically optimal, formula for $k$, valid for any distribution $D$. We then show that using the Young/Daly formula to compute $k$ (as $k \cdot \mu_D = P_{YD}$) is a first-order approximation of this formula. We also consider dynamic strategies where one decides to checkpoint at the end of an iteration only if the total amount of work since the last checkpoint exceeds a threshold $W_{th}$, and otherwise proceeds to the next iteration. Similarly, we provide a closed-form formula for this threshold and show that $P_{YD}$ is a first-order approximation of $W_{th}$. Finally, we provide an extensive set of simulations where $D$ is either Uniform, Gamma or truncated Normal, which shows the global accuracy of the Young/Daly formula, even when the distribution $D$ has a large standard deviation (and when one cannot use a first-order approximation). Hence we establish that the relevance of the formula goes well beyond its original framework.
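For concreteness, a minimal sketch of the two quantities the abstract compares, using made-up numbers rather than values from the paper:

```python
import math

mu_f = 24 * 3600.0   # MTBF in seconds (assumed value)
C = 60.0             # checkpoint time in seconds (assumed value)
mu_D = 300.0         # mean iteration length in seconds (assumed value)

# Young/Daly period for a divisible-load application.
P_YD = math.sqrt(2 * mu_f * C)

# First-order static strategy for an iterative application:
# checkpoint every k iterations, with k * mu_D roughly equal to P_YD.
k = max(1, round(P_YD / mu_D))

print(f"P_YD = {P_YD:.0f} s -> checkpoint every k = {k} iterations "
      f"(about {k * mu_D:.0f} s of work between checkpoints)")
```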
Document type :
Conference papers
Contributor: Equipe Roma
Submitted on : Monday, November 30, 2020 - 4:40:15 PM
Last modification on : Thursday, December 3, 2020 - 1:52:14 PM
### Citation
Yishu Du, Loris Marchal, Yves Robert, Guillaume Pallez. Robustness of the Young/Daly formula for stochastic iterative applications. ICPP 2020 - 49th International Conference on Parallel Processing, Aug 2020, Edmonton / Virtual, Canada. pp.1-11, ⟨10.1145/3404397.3404419⟩. ⟨hal-03024618⟩
http://terrytao.wordpress.com/tag/equidistribution/
You are currently browsing the tag archive for the ‘equidistribution’ tag.
In Notes 5, we saw that the Gowers uniformity norms on vector spaces ${{\bf F}^n}$ in high characteristic were controlled by classical polynomial phases ${e(\phi)}$.
Now we study the analogous situation on cyclic groups ${{\bf Z}/N{\bf Z}}$. Here, there is an unexpected surprise: the polynomial phases (classical or otherwise) are no longer sufficient to control the Gowers norms ${U^{s+1}({\bf Z}/N{\bf Z})}$ once ${s}$ exceeds ${1}$. To resolve this problem, one must enlarge the space of polynomials to a larger class. It turns out that there are at least three closely related options for this class: the local polynomials, the bracket polynomials, and the nilsequences. Each of the three classes has its own strengths and weaknesses, but in my opinion the nilsequences seem to be the most natural class, due to the rich algebraic and dynamical structure coming from the nilpotent Lie group undergirding such sequences. For reasons of space we shall focus primarily on the nilsequence viewpoint here.
Traditionally, nilsequences have been defined in terms of linear orbits ${n \mapsto g^n x}$ on nilmanifolds ${G/\Gamma}$; however, in recent years it has been realised that it is convenient for technical reasons (particularly for the quantitative “single-scale” theory) to generalise this setup to that of polynomial orbits ${n \mapsto g(n) \Gamma}$, and this is the perspective we will take here.
A polynomial phase ${n \mapsto e(\phi(n))}$ on a finite abelian group ${H}$ is formed by starting with a polynomial ${\phi: H \rightarrow {\bf R}/{\bf Z}}$ to the unit circle, and then composing it with the exponential function ${e: {\bf R}/{\bf Z} \rightarrow {\bf C}}$. To create a nilsequence ${n \mapsto F(g(n) \Gamma)}$, we generalise this construction by starting with a polynomial ${g \Gamma: H \rightarrow G/\Gamma}$ into a nilmanifold ${G/\Gamma}$, and then composing this with a Lipschitz function ${F: G/\Gamma \rightarrow {\bf C}}$. (The Lipschitz regularity class is convenient for minor technical reasons, but one could also use other regularity classes here if desired.) These classes of sequences certainly include the polynomial phases, but are somewhat more general; for instance, they almost include bracket polynomial phases such as ${n \mapsto e( \lfloor \alpha n \rfloor \beta n )}$. (The “almost” here is because the relevant functions ${F: G/\Gamma \rightarrow {\bf C}}$ involved are only piecewise Lipschitz rather than Lipschitz, but this is primarily a technical issue and one should view bracket polynomial phases as “morally” being nilsequences.)
In these notes we set out the basic theory for these nilsequences, including their equidistribution theory (which generalises the equidistribution theory of polynomial flows on tori from Notes 1) and show that they are indeed obstructions to the Gowers norm being small. This leads to the inverse conjecture for the Gowers norms that shows that the Gowers norms on cyclic groups are indeed controlled by these sequences.
In the previous lectures, we have focused mostly on the equidistribution or linear patterns on a subset of the integers ${{\bf Z}}$, and in particular on intervals ${[N]}$. The integers are of course a very important domain to study in additive combinatorics; but there are also other fundamental model examples of domains to study. One of these is that of a vector space ${V}$ over a finite field ${{\bf F} = {\bf F}_p}$ of prime order. Such domains are of interest in computer science (particularly when ${p=2}$) and also in number theory; but they also serve as an important simplified “dyadic model” for the integers. See this survey article of Green for further discussion of this point.
The additive combinatorics of the integers ${{\bf Z}}$, and of vector spaces ${V}$ over finite fields, are analogous, but not quite identical. For instance, the analogue of an arithmetic progression in ${{\bf Z}}$ is a subspace of ${V}$. In many cases, the finite field theory is a little bit simpler than the integer theory; for instance, subspaces are closed under addition, whereas arithmetic progressions are only “almost” closed under addition in various senses. (For instance, ${[N]}$ is closed under addition approximately half of the time.) However, there are some ways in which the integers are better behaved. For instance, because the integers can be generated by a single generator, a homomorphism from ${{\bf Z}}$ to some other group ${G}$ can be described by a single group element ${g}$: ${n \mapsto g^n}$. However, to specify a homomorphism from a vector space ${V}$ to ${G}$ one would need to specify one group element for each dimension of ${V}$. Thus we see that there is a tradeoff when passing from ${{\bf Z}}$ (or ${[N]}$) to a vector space model; one gains a bounded torsion property, at the expense of conceding the bounded generation property. (Of course, if one wants to deal with arbitrarily large domains, one has to concede one or the other; the only additive groups that have both bounded torsion and boundedly many generators, are bounded.)
The starting point for this course (Notes 1) was the study of equidistribution of polynomials ${P: {\bf Z} \rightarrow {\bf R}/{\bf Z}}$ from the integers to the unit circle. We now turn to the parallel theory of equidistribution of polynomials ${P: V \rightarrow {\bf R}/{\bf Z}}$ from vector spaces over finite fields to the unit circle. Actually, for simplicity we will mostly focus on the classical case, when the polynomials in fact take values in the ${p^{th}}$ roots of unity (where ${p}$ is the characteristic of the field ${{\bf F} = {\bf F}_p}$). As it turns out, the non-classical case is also of importance (particularly in low characteristic), but the theory is more difficult; see these notes for some further discussion.
(Linear) Fourier analysis can be viewed as a tool to study an arbitrary function ${f}$ on (say) the integers ${{\bf Z}}$, by looking at how such a function correlates with linear phases such as ${n \mapsto e(\xi n)}$, where ${e(x) := e^{2\pi i x}}$ is the fundamental character, and ${\xi \in {\bf R}}$ is a frequency. These correlations control a number of expressions relating to ${f}$, such as the expected behaviour of ${f}$ on arithmetic progressions ${n, n+r, n+2r}$ of length three.
In this course we will be studying higher-order correlations, such as the correlation of ${f}$ with quadratic phases such as ${n \mapsto e(\xi n^2)}$, as these will control the expected behaviour of ${f}$ on more complex patterns, such as arithmetic progressions ${n, n+r, n+2r, n+3r}$ of length four. In order to do this, we must first understand the behaviour of exponential sums such as
$\displaystyle \sum_{n=1}^N e( \alpha n^2 ).$
Such sums are closely related to the distribution of expressions such as ${\alpha n^2 \hbox{ mod } 1}$ in the unit circle ${{\bf T} := {\bf R}/{\bf Z}}$, as ${n}$ varies from ${1}$ to ${N}$. More generally, one is interested in the distribution of polynomials ${P: {\bf Z}^d \rightarrow {\bf T}}$ of one or more variables taking values in a torus ${{\bf T}}$; for instance, one might be interested in the distribution of the quadruplet ${(\alpha n^2, \alpha (n+r)^2, \alpha(n+2r)^2, \alpha(n+3r)^2)}$ as ${n,r}$ both vary from ${1}$ to ${N}$. Roughly speaking, once we understand these types of distributions, then the general machinery of quadratic Fourier analysis will then allow us to understand the distribution of the quadruplet ${(f(n), f(n+r), f(n+2r), f(n+3r))}$ for more general classes of functions ${f}$; this can lead for instance to an understanding of the distribution of arithmetic progressions of length ${4}$ in the primes, if ${f}$ is somehow related to the primes.
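(As a concrete numerical illustration, added here and not part of the original notes, one can compare the normalised quadratic Weyl sum ${\frac{1}{N} \sum_{n=1}^N e(\alpha n^2)}$ for a rational and an irrational ${\alpha}$; the rational phase produces an average of size comparable to ${1}$, while the irrational phase produces a much smaller average, reflecting the equidistribution of ${\alpha n^2 \hbox{ mod } 1}$.)

```python
import numpy as np

def weyl_sum(alpha, N):
    """Normalised quadratic Weyl sum (1/N) * sum_{n=1}^N exp(2*pi*i*alpha*n^2)."""
    n = np.arange(1, N + 1)
    return np.exp(2j * np.pi * alpha * n**2).mean()

N = 10**5
for label, alpha in [("alpha = 1/3", 1.0 / 3.0), ("alpha = sqrt(2)", np.sqrt(2.0))]:
    print(f"{label}: |S_N / N| = {abs(weyl_sum(alpha, N)):.4f}")
# Expected behaviour: roughly 0.58 for alpha = 1/3, and a value close to 0
# (on the order of N^{-1/2}) for alpha = sqrt(2).
```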
More generally, to find arithmetic progressions such as ${n,n+r,n+2r,n+3r}$ in a set ${A}$, it would suffice to understand the equidistribution of the quadruplet ${(1_A(n), 1_A(n+r), 1_A(n+2r), 1_A(n+3r))}$ in ${\{0,1\}^4}$ as ${n}$ and ${r}$ vary. This is the starting point for the fundamental connection between combinatorics (and more specifically, the task of finding patterns inside sets) and dynamics (and more specifically, the theory of equidistribution and recurrence in measure-preserving dynamical systems, which is a subfield of ergodic theory). This connection was explored in one of my previous classes; it will also be important in this course (particularly as a source of motivation), but the primary focus will be on finitary, and Fourier-based, methods.
The theory of equidistribution of polynomial orbits was developed in the linear case by Dirichlet and Kronecker, and in the polynomial case by Weyl. There are two regimes of interest; the (qualitative) asymptotic regime in which the scale parameter ${N}$ is sent to infinity, and the (quantitative) single-scale regime in which ${N}$ is kept fixed (but large). Traditionally, it is the asymptotic regime which is studied, which connects the subject to other asymptotic fields of mathematics, such as dynamical systems and ergodic theory. However, for many applications (such as the study of the primes), it is the single-scale regime which is of greater importance. The two regimes are not directly equivalent, but are closely related: the single-scale theory can be usually used to derive analogous results in the asymptotic regime, and conversely the arguments in the asymptotic regime can serve as a simplified model to show the way to proceed in the single-scale regime. The analogy between the two can be made tighter by introducing the (qualitative) ultralimit regime, which is formally equivalent to the single-scale regime (except for the fact that explicitly quantitative bounds are abandoned in the ultralimit), but resembles the asymptotic regime quite closely.
We will view the equidistribution theory of polynomial orbits as a special case of Ratner’s theorem, which we will study in more generality later in this course.
For the finitary portion of the course, we will be using asymptotic notation: ${X \ll Y}$, ${Y \gg X}$, or ${X = O(Y)}$ denotes the bound ${|X| \leq CY}$ for some absolute constant ${C}$, and if we need ${C}$ to depend on additional parameters then we will indicate this by subscripts, e.g. ${X \ll_d Y}$ means that ${|X| \leq C_d Y}$ for some ${C_d}$ depending only on ${d}$. In the ultralimit theory we will use an analogue of asymptotic notation, which we will review later in these notes.
Today, Prof. Margulis continued his lecture series, focusing on two specific examples of homogeneous dynamics applications to number theory, namely counting lattice points on algebraic varieties, and quantitative versions of the Oppenheim conjecture. (Due to lack of time, the third application mentioned in the previous lecture, namely metric theory of Diophantine approximation, was not covered.)
The final distinguished lecture series for the academic year here at UCLA is being given this week by Gregory Margulis, who is giving three lectures on “homogeneous dynamics and number theory”. In his first lecture, Prof. Margulis surveyed some classical problems in number theory that turn out, rather surprisingly, to have more or less equivalent counterparts in homogeneous dynamics – the theory of dynamical systems on homogeneous spaces $G/\Gamma$.
As usual, any errors in this post are due to my transcription of the talk.
This week I was in Columbus, Ohio, attending a conference on equidistribution on manifolds. I talked about my recent paper with Ben Green on the quantitative behaviour of polynomial sequences in nilmanifolds, which I have blogged about previously. During my talk (and inspired by the immediately preceding talk of Vitaly Bergelson), I stated explicitly for the first time a generalisation of the van der Corput trick which morally underlies our paper, though it is somewhat buried there as we specialised it to our application at hand (and also had to deal with various quantitative issues that made the presentation more complicated). After the talk, several people asked me for a more precise statement of this trick, so I am presenting it here, and as an application reproving an old theorem of Leon Green that gives a necessary and sufficient condition as to whether a linear sequence $(g^n x)_{n=1}^\infty$ on a nilmanifold $G/\Gamma$ is equidistributed, which generalises the famous theorem of Weyl on equidistribution of polynomials.
UPDATE, Feb 2013: It has been pointed out to me by Pavel Zorin that this argument does not fully recover the theorem of Leon Green; to cover all cases, one needs the more complicated van der Corput argument in our paper.
Ben Green and I have just uploaded our joint paper, “The distribution of polynomials over finite fields, with applications to the Gowers norms“, to the arXiv, and submitted to Contributions to Discrete Mathematics. This paper, which we first announced at the recent FOCS meeting, and then gave an update on two weeks ago on this blog, is now in final form. It is being made available simultaneously with a closely related paper of Lovett, Meshulam, and Samorodnitsky.
In the previous post on this topic, I focused on the negative results in the paper, and in particular the fact that the inverse conjecture for the Gowers norm fails for certain degrees in low characteristic. Today, I’d like to focus instead on the positive results, which assert that for polynomials in many variables over finite fields whose degree is less than the characteristic of the field, one has a satisfactory theory for the distribution of these polynomials. Very roughly speaking, the main technical results are:
• A regularity lemma: Any polynomial can be expressed as a combination of a bounded number of other polynomials which are regular, in the sense that no non-trivial linear combination of these polynomials can be expressed efficiently in terms of lower degree polynomials.
• A counting lemma: A regular collection of polynomials behaves as if the polynomials were selected randomly. In particular, the polynomials are jointly equidistributed (see the small numerical sketch after this list).
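As a quick sanity check on the flavour of the counting lemma, here is a toy Python sketch; it only treats the very simplest case of a single polynomial rather than a regular collection, and the particular polynomial, prime, and number of variables are arbitrary illustrative choices. A nondegenerate quadratic polynomial in ${n}$ variables over ${{\bf F}_p}$ takes each value roughly ${p^{n-1}}$ times.

```python
# Tally the values of a single quadratic polynomial over F_p^n; each value should be
# attained roughly p^(n-1) times, i.e. the polynomial is close to equidistributed.
from collections import Counter
from itertools import product

p, n = 7, 3

def Q(x, y, z):
    # an arbitrary nondegenerate quadratic polynomial, chosen purely for illustration
    return (x * y + z * z) % p

counts = Counter(Q(*v) for v in product(range(p), repeat=n))
for value in range(p):
    print(f"Q = {value}: {counts[value]:3d} points (uniform count would be {p ** (n - 1)})")
```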
Ben Green and I have just uploaded our paper “The quantitative behaviour of polynomial orbits on nilmanifolds” to the arXiv (and shortly to be submitted to a journal, once a companion paper is finished). This paper grew out of our efforts to prove the Möbius and Nilsequences conjecture MN(s) from our earlier paper, which has applications to counting various linear patterns in primes (Dickson’s conjecture). These efforts were successful – as the companion paper will reveal – but it turned out that in order to establish this number-theoretic conjecture, we had to first establish a purely dynamical quantitative result about polynomial sequences in nilmanifolds, very much in the spirit of the celebrated theorems of Marina Ratner on unipotent flows; I plan to discuss her theorems in more detail in a followup post to this one. In this post I will not discuss the number-theoretic applications or the connections with Ratner’s theorem, and instead describe our result from a slightly different viewpoint, starting from some very simple examples and gradually moving to the general situation considered in our paper.
To begin with, consider an infinite linear sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ in the unit circle ${\Bbb R}/{\Bbb Z}$, where $\alpha, \beta \in {\Bbb R}/{\Bbb Z}$. (One can think of this sequence as the orbit of $\beta$ under the action of the shift operator $T: x \mapsto x +\alpha$ on the unit circle.) This sequence can do one of two things:
1. If $\alpha$ is rational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is periodic and thus only takes on finitely many values.
2. If $\alpha$ is irrational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is dense in ${\Bbb R}/{\Bbb Z}$. In fact, it is not just dense, it is equidistributed, or equivalently that
$\displaystyle\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N F( n \alpha + \beta ) = \int_{{\Bbb R}/{\Bbb Z}} F$
for all continuous functions $F: {\Bbb R}/{\Bbb Z} \to {\Bbb C}$. This statement is known as the equidistribution theorem.
We thus see that infinite linear sequences exhibit a sharp dichotomy in behaviour between periodicity and equidistribution; intermediate scenarios, such as concentration on a fractal set (such as a Cantor set), do not occur with linear sequences. This dichotomy between structure and randomness is in stark contrast to exponential sequences such as $( 2^n \alpha)_{n \in {\Bbb N}}$, which can exhibit an extremely wide spectrum of behaviours. For instance, the question of whether $(10^n \pi)_{n \in {\Bbb N}}$ is equidistributed mod 1 is an old unsolved problem, equivalent to asking whether $\pi$ is normal base 10.
Intermediate between linear sequences and exponential sequences are polynomial sequences $(P(n))_{n \in {\Bbb N}}$, where P is a polynomial with coefficients in ${\Bbb R}/{\Bbb Z}$. A famous theorem of Weyl asserts that infinite polynomial sequences enjoy the same dichotomy as their linear counterparts, namely that they are either periodic (which occurs when all non-constant coefficients are rational) or equidistributed (which occurs when at least one non-constant coefficient is irrational). Thus for instance the fractional parts $\{ \sqrt{2}n^2\}$ of $\sqrt{2} n^2$ are equidistributed modulo 1. This theorem is proven by Fourier analysis combined with non-trivial bounds on Weyl sums.
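As a quick numerical sanity check (this sketch and the specific choices in it, such as $\alpha = \sqrt{2}$ and the frequencies tested, are purely illustrative), Weyl's criterion says that equidistribution of the fractional parts of $\sqrt{2} n^2$ is equivalent to the normalised exponential sums $\frac{1}{N} \sum_{n=1}^N e( k \sqrt{2} n^2 )$ (where $e(x) := e^{2\pi i x}$) tending to zero for each fixed non-zero integer $k$, and one can watch these sums shrink numerically; floating-point roundoff limits how large $N$ can usefully be taken.

```python
# Watch Weyl's criterion in action for the sequence sqrt(2)*n^2 mod 1: for each fixed
# nonzero frequency k, the averaged exponential sum S_N should tend to zero as N grows.
import cmath
import math

def weyl_sum(alpha, k, N):
    """Return (1/N) * sum_{n=1}^{N} exp(2*pi*i*k*alpha*n^2)."""
    total = sum(cmath.exp(2j * math.pi * k * alpha * n * n) for n in range(1, N + 1))
    return total / N

alpha = math.sqrt(2)
for k in (1, 2, 3):
    for N in (100, 1000, 10000):
        print(f"k = {k}, N = {N:5d}, |S_N| = {abs(weyl_sum(alpha, k, N)):.4f}")
```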
For our applications, we are interested in strengthening these results in two directions. Firstly, we wish to generalise from polynomial sequences in the circle ${\Bbb R}/{\Bbb Z}$ to polynomial sequences $(g(n)\Gamma)_{n \in {\Bbb N}}$ in other homogeneous spaces, in particular nilmanifolds. Secondly, we need quantitative equidistribution results for finite orbits $(g(n)\Gamma)_{1 \leq n \leq N}$ rather than qualitative equidistribution for infinite orbits $(g(n)\Gamma)_{n \in {\Bbb N}}$.
# An Application of p-adic Volume to Minimal Models
Today I’ll sketch a proof of Ito that birational smooth minimal models have all of their Hodge numbers exactly the same. It uses the ${p}$-adic integration from last time plus one piece of heavy machinery.
First, the piece of heavy machinery: if ${X, Y}$ are finite type schemes over the ring of integers ${\mathcal{O}_K}$ of a number field whose generic fibers are smooth and proper, and if ${|X(\mathcal{O}_K/\mathfrak{p})|=|Y(\mathcal{O}_K/\mathfrak{p})|}$ for all but finitely many prime ideals ${\mathfrak{p}}$, then the generic fibers ${X_\eta}$ and ${Y_\eta}$ have the same Hodge numbers.
If you’ve seen these types of hypotheses before, then there’s an obvious set of theorems that will probably be used to prove this (Chebotarev + Hodge-Tate decomposition + Weil conjectures). Let’s first restrict our attention to a single prime. Since we will be able to throw out bad primes, suppose we have ${X, Y}$ smooth, proper varieties over ${\mathbb{F}_q}$ of characteristic ${p}$.
Proposition: If ${|X(\mathbb{F}_{q^r})|=|Y(\mathbb{F}_{q^r})|}$ for all ${r}$, then ${X}$ and ${Y}$ have the same ${\ell}$-adic Betti numbers.
This is a basic exercise in using the Weil conjectures. First, ${X}$ and ${Y}$ clearly have the same Zeta functions, because the Zeta function is defined entirely by the number of points over ${\mathbb{F}_{q^r}}$. But the Zeta function decomposes
$\displaystyle Z(X,t)=\frac{P_1(t)\cdots P_{2n-1}(t)}{P_0(t)\cdots P_{2n}(t)}$
where ${P_i}$ is the characteristic polynomial of Frobenius acting on ${H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)}$. The Weil conjectures tell us we can recover the ${P_i(t)}$ if we know the Zeta function. But now
$\displaystyle \dim H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)=\deg P_i(t)=\dim H^i(Y_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)$
and hence the Betti numbers are the same. Now let’s go back and notice the magic of ${\ell}$-adic cohomology. Suppose ${X}$ and ${Y}$ are as before over the ring of integers of a number field. Our assumption about the number of points over finite fields being the same for all but finitely many primes implies that we can pick a prime of good reduction and get that the ${\ell}$-adic Betti numbers of the reductions are the same ${b_i(X_p)=b_i(Y_p)}$.
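To see this mechanism in the most hands-on case, here is a small Python sketch; the curve ${y^2=x^3+2x+1}$ and the prime ${7}$ are arbitrary illustrative choices, and the computation is only a numerical illustration rather than part of the argument. For an elliptic curve the single count ${|E(\mathbb{F}_p)|}$ already determines ${P_1(t)=1-at+pt^2}$, and hence the count over every ${\mathbb{F}_{p^r}}$; the code checks this for ${r=2}$ by brute force.

```python
# For an elliptic curve E over F_p, the count over F_p determines the characteristic
# polynomial P_1(t) = 1 - a*t + p*t^2 of Frobenius on H^1, and therefore the count
# over F_{p^2}.  We verify this for y^2 = x^3 + 2x + 1 over F_7 by brute force.
p = 7

def count_Fp():
    # projective points: affine solutions plus the single point at infinity
    return 1 + sum(1 for x in range(p) for y in range(p)
                   if (y * y - (x ** 3 + 2 * x + 1)) % p == 0)

# F_{p^2} = F_p[i]/(i^2 + 1), which works since -1 is a non-square mod 7
def mul(u, v):
    return ((u[0] * v[0] - u[1] * v[1]) % p, (u[0] * v[1] + u[1] * v[0]) % p)

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def rhs(x):
    # x^3 + 2x + 1 evaluated in F_{p^2}
    return add(add(mul(mul(x, x), x), mul((2, 0), x)), (1, 0))

def count_Fp2():
    elems = [(u, v) for u in range(p) for v in range(p)]
    return 1 + sum(1 for x in elems for y in elems if mul(y, y) == rhs(x))

N1 = count_Fp()
a = p + 1 - N1                                  # trace of Frobenius, read off from N1
N2_predicted = p ** 2 + 1 - (a ** 2 - 2 * p)    # since alpha^2 + beta^2 = a^2 - 2p
print(f"#E(F_{p}) = {N1}, so a = {a}")
print(f"predicted #E(F_{p * p}) = {N2_predicted}, brute force gives {count_Fp2()}")
```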
One of the main purposes of ${\ell}$-adic cohomology is that it is “topological.” By smooth, proper base change we get that the ${\ell}$-adic Betti numbers of the geometric generic fibers are the same
$\displaystyle b_i(X_{\overline{\eta}})=b_i(X_p)=b_i(Y_p)=b_i(Y_{\overline{\eta}}).$
By the standard characteristic ${0}$ comparison theorem we then get that the singular cohomology is the same when base changing to ${\mathbb{C}}$, i.e.
$\displaystyle \dim H^i(X_\eta\otimes \mathbb{C}, \mathbb{Q})=\dim H^i(Y_\eta \otimes \mathbb{C}, \mathbb{Q}).$
Now we use the Chebotarev density theorem. The Galois representations on each cohomology have the same traces of Frobenius for all but finitely many primes by assumption and hence the semisimplifications of these Galois representations are the same everywhere! Lastly, these Galois representations are coming from smooth, proper varieties and hence the representations are Hodge-Tate. You can now read the Hodge numbers off of the Hodge-Tate decomposition of the semisimplification and hence the two generic fibers have the same Hodge numbers.
Alright, in some sense that was the “uninteresting” part, because it just uses a bunch of machines and is a known fact (there’s also a lot of stuff to fill in to the above sketch to finish the argument). Here’s the application of ${p}$-adic integration.
Suppose ${X}$ and ${Y}$ are smooth birational minimal models over ${\mathbb{C}}$ (for simplicity we’ll assume they are Calabi-Yau, Ito shows how to get around not necessarily having a non-vanishing top form). I’ll just sketch this part as well, since there are some subtleties with making sure you don’t mess up too much in the process. We can “spread out” our varieties to get our setup in the beginning. Namely, there are proper models over some ${\mathcal{O}_K}$ (of course they aren’t smooth anymore), where the base change of the generic fibers are isomorphic to our original varieties.
By standard birational geometry arguments, there is some big open locus (the complement has codimension at least ${2}$) where these are isomorphic and this descends to our model as well. Now we are almost there. We have an etale isomorphism ${U\rightarrow V}$ over all but finitely many primes. If we choose nowhere vanishing top forms on the models, then the restrictions to the fibers are ${p}$-adic volume forms.
But our standard trick works again here. The isomorphism ${U\rightarrow V}$ pulls back the volume form on ${Y}$ to a volume form on ${X}$ over all but finitely many primes and hence they differ by a function which has ${p}$-adic absolute value ${1}$ everywhere. Thus the two models have the same volume over all but finitely many primes, and as was pointed out last time the two must have the same number of ${\mathbb{F}_{q^r}}$-valued points over these primes since we can read this off from knowing the volume.
The machinery says that we can now conclude the two smooth birational minimal models have the same Hodge numbers. I thought that was a pretty cool and unexpected application of this idea of ${p}$-adic volume. It is the only one I know of. I’d be interested if anyone knows of any other.
I came across this idea a long time ago, but I needed the result that uses it in its proof again, so I was curious about figuring out what in the world is going on. It turns out that you can make “${p}$-adic measures” to integrate against on algebraic varieties. This is a pretty cool idea that I never would have guessed possible. I mean, maybe complex varieties or something, but over ${p}$-adic fields?
Let’s start with a pretty standard setup in ${p}$-adic geometry. Let ${K/\mathbb{Q}_p}$ be a finite extension and ${R}$ the ring of integers of ${K}$. Let ${\mathbb{F}_q=R/\mathfrak{m}}$ be the residue field. If this scares you, then just take ${K=\mathbb{Q}_p}$ and ${R=\mathbb{Z}_p}$.
Now let ${X\rightarrow Spec(R)}$ be a smooth scheme of relative dimension ${n}$. The picture to have in mind here is some smooth ${n}$-dimensional variety over a finite field ${X_0}$ as the closed fiber and a smooth characteristic ${0}$ version of this variety, ${X_\eta}$, as the generic fiber. This scheme is just interpolating between the two.
Now suppose we have an ${n}$-form ${\omega\in H^0(X, \Omega_{X/R}^n)}$. We want to say what it means to integrate against this form. Let ${|\cdot |_p}$ be the normalized ${p}$-adic absolute value on ${K}$. We want to consider the ${p}$-adic topology on the set of ${R}$-valued points ${X(R)}$. This can be a little weird if you haven’t done it before. It is a totally disconnected, compact space.
The idea for the definition is the exact naive way of converting the definition from a manifold to this setting. Consider some point ${s\in X(R)}$. Locally in the ${p}$-adic topology we can find a “disk” containing ${s}$. This means there is some open ${U}$ about ${s}$ together with a ${p}$-adic analytic isomorphism ${U\rightarrow V\subset R^n}$ to some open.
In the usual way, we now have a choice of local coordinates ${x=(x_i)}$. This means we can write ${\omega|_U=fdx_1\wedge\cdots \wedge dx_n}$ where ${f}$ is a ${p}$-adic analytic function on ${V}$. Now we just define
$\displaystyle \int_U \omega= \int_V |f(x)|_p dx_1 \cdots dx_n.$
Now maybe it looks like we’ve converted this to another weird ${p}$-adic integration problem that we don’t know how to do, but the right hand side makes sense because ${R^n}$ is a compact topological group so we integrate with respect to the normalized Haar measure. Now we’re done, because modulo standard arguments that everything patches together we can define ${\int_X \omega}$ in terms of these local patches (the reason for being able to patch without bump functions will be clear in a moment, but roughly on overlaps the form will differ by a unit, which has ${p}$-adic absolute value ${1}$).
This allows us to define a “volume form” for smooth ${p}$-adic schemes. We will call an ${n}$-form a volume form if it is nowhere vanishing (i.e. it trivializes ${\Omega^n}$). You might be scared that the volume you get by integrating isn’t well-defined. After all, on a real manifold you can just scale a non-vanishing ${n}$-form to get another one, but the integral will be scaled by that constant.
We’re in luck here, because if ${\omega}$ and ${\omega'}$ are both volume forms, then there is some non-vanishing function ${f}$ such that ${\omega=f\omega'}$. Since ${f}$ is never ${0}$, it is invertible, and hence is a unit. This means ${|f(x)|_p=1}$. Since we can only get other volume forms by scaling by a function with ${p}$-adic absolute value ${1}$ everywhere, the volume is a well-defined notion under this definition! (A priori, there could be a bunch of “different” forms, though).
It turns out to actually be a really useful notion as well. If we want to compute the volume of ${X/R}$, then there is a natural way to do it with our set-up. Consider the reduction mod ${\mathfrak{m}}$ map ${\phi: X(R)\rightarrow X(\mathbb{F}_q)}$. The fiber over any point is a ${p}$-adic open set, and they partition ${X(R)}$ into a disjoint union of ${|X(\mathbb{F}_q)|}$ mutually isomorphic sets (recall the reduction map is surjective here by the relevant variant on Hensel’s lemma). Fix one point ${x_0\in X(\mathbb{F}_q)}$, and define ${U:=\phi^{-1}(x_0)}$. Then by the above analysis we get
$\displaystyle Vol(X)=\int_X \omega=|X(\mathbb{F}_q)|\int_{U}\omega$
All we have to do is compute this integral over one open now. By our smoothness hypothesis, we can find a regular system of parameters ${x_1, \ldots, x_n\in \mathcal{O}_{X, x_0}}$. This is a legitimate choice of coordinates because they define a ${p}$-adic analytic isomorphism with ${\mathfrak{m}^n\subset R^n}$.
Now we use the same silly trick as before. Suppose ${\omega=fdx_1\wedge \cdots \wedge dx_n}$, then since ${\omega}$ is a volume form, ${f}$ can’t vanish and hence ${|f(x)|_p=1}$ on ${U}$. Thus
$\displaystyle \int_{U}\omega=\int_{\mathfrak{m}^n}dx_1\cdots dx_n=\frac{1}{q^n}$
This tells us that no matter what ${X/R}$ is, if there is a volume form (which often there isn’t), then the volume
$\displaystyle Vol(X)=\frac{|X(\mathbb{F}_q)|}{q^n}$
is just the number of ${\mathbb{F}_q}$-rational points rescaled by a factor depending only on the size of the residue field and the dimension of ${X}$. Next time we’ll talk about the one place I know of where this has been a really useful idea.
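Before moving on, here is a small Python sketch of the residue-disk count behind this formula in the simplest non-trivial case; the affine curve ${y^2=x^3+2x+1}$ over ${\mathbb{Z}_7}$ and the chosen point are arbitrary illustrative choices. Over a point of the special fiber where the curve is smooth, Hensel's lemma gives exactly ${p^{k-1}}$ solutions mod ${p^k}$ lying over it, each corresponding to one residue class of the local coordinate ${x}$, so dividing by ${p^k}$ recovers the mass ${1/p=1/q^n}$ of the disk (here ${n=1}$).

```python
# Count solutions of y^2 = x^3 + 2x + 1 modulo p^k reducing to a fixed smooth point
# (x0, y0) of the special fiber; Hensel's lemma predicts exactly p^(k-1) lifts, so the
# residue disk has normalized mass p^(k-1)/p^k = 1/p.
p = 7
x0, y0 = 0, 1   # a point on the curve mod 7 where dF/dy = 2y is a unit

def F(x, y, modulus):
    return (y * y - (x ** 3 + 2 * x + 1)) % modulus

for k in (1, 2, 3):
    modulus = p ** k
    lifts = sum(1
                for x in range(x0, modulus, p)   # x congruent to x0 mod p
                for y in range(y0, modulus, p)   # y congruent to y0 mod p
                if F(x, y, modulus) == 0)
    print(f"k = {k}: {lifts} lifts mod p^{k} (expected {p ** (k - 1)}), "
          f"normalized mass {lifts}/{modulus}")
```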
# BSD for a Large Class of Elliptic Curves
I’m giving up on the p-divisible group posts for awhile. I would have to be too technical and tedious to write anything interesting about enlarging the base. It is pretty fascinating stuff, but not blog material at the moment.
I’ve been playing around with counting fibration structures on K3 surfaces, and I just noticed something I probably should have been aware of for a long time. This is totally well-known, but I’ll give a slightly anachronistic presentation so that we can use results from 2013 to prove the Birch and Swinnerton-Dyer conjecture!! … Well, only in a case that has been known since 1973 when it was published by Artin and Swinnerton-Dyer.
Let’s recall the Tate conjecture for surfaces. Let ${k}$ be a finite field and ${X/k}$ a smooth, projective surface. We’ve written this down many times now, but the long exact sequence associated to the Kummer sequence
$\displaystyle 0\rightarrow \mu_{\ell}\rightarrow \mathbb{G}_m\rightarrow \mathbb{G}_m\rightarrow 0$
(for ${\ell\neq \text{char}(k)}$) gives us a cycle class map
$\displaystyle c_1: Pic(X_{\overline{k}})\otimes \mathbb{Q}_{\ell}\rightarrow H^2_{et}(X_{\overline{k}}, \mathbb{Q}_\ell(1))$
In fact, we could take Galois invariants to get our standard
$\displaystyle 0\rightarrow Pic(X)\otimes \mathbb{Q}_{\ell}\rightarrow H^2_{et}(X_{\overline{k}}, \mathbb{Q}_\ell(1))^G\rightarrow Br(X)[\ell^\infty]\rightarrow 0$
The Tate conjecture is in some sense the positive characteristic version of the Hodge conjecture. It conjectures that the first map is surjective. In other words, whenever an ${\ell}$-adic class “looks like” it could come from an honest geometric thing, then it does. But if the Tate conjecture is true, then this implies the ${\ell}$-primary part of ${Br(X)}$ is finite. We could spend some time worrying about independence of ${\ell}$, but it works, and hence the Tate conjecture is actually equivalent to finiteness of ${Br(X)}$.
Suppose now that ${X}$ is an elliptic K3 surface. This just means that there is a flat map ${X\rightarrow \mathbb{P}^1}$ where the fibers are elliptic curves (there are some degenerate fibers, but after some heavy machinery we could always put this into some nice form, we’re sketching an argument here so we won’t worry about the technical details of what we want “fibration” to mean). The generic fiber ${X_\eta}$ is a genus ${1}$ curve that does not necessarily have a rational point and hence is not necessarily an elliptic curve.
But we can just use a relative version of the Jacobian construction to produce a new fibration ${J\rightarrow \mathbb{P}^1}$ where ${J}$ is a K3 surface fiberwise isomorphic to ${X}$, but now ${J_\eta=Jac(X_\eta)}$ and hence is an elliptic curve. Suppose we want to classify elliptic fibrations that have ${J}$ as the relative Jacobian. We have two natural ideas to do this.
The first is that etale locally such a fibration is trivial, so you could consider all glueing data to piece such a thing together. The obstruction will be some Cech class that actually lives in ${H^2(X, \mathbb{G}_m)=Br(X)}$. In fancy language, you make these things as ${\mathbb{G}_m}$-gerbes which are just twisted relative moduli of sheaves. The class in ${Br(X)}$ is giving you the obstruction to the existence of a universal sheaf.
A more number theoretic way to think about this is that rather than think about surfaces over ${k}$, we work with the generic fiber ${X_\eta/k(t)}$. It is well-known that the Weil-Chatelet group ${H^1(Gal(k(t)^{sep}/k(t)), J_\eta)}$ gives you the possible genus ${1}$ curves that could occur as generic fibers of such fibrations. This group is way too big though, because we only want ones that are locally trivial everywhere (otherwise it won’t be a fibration).
So it shouldn’t be surprising that the classification of such things is given by the Tate-Shafarevich group:
Ш $\displaystyle (J_\eta /k(t))=ker ( H^1(G, J_\eta)\rightarrow \prod H^1(G_v, (J_\eta)_v))$
Very roughly, I’ve now given a heuristic argument (namely that they both classify the same set of things) that ${Br(X)\simeq}$ Ш ${(J_\eta)}$, and it turns out that Grothendieck proved the natural map that comes from the Leray spectral sequence ${Br(X)\rightarrow}$ Ш${(J_\eta)}$ is an isomorphism (this rigorous argument might actually have been easier than the heuristic one because we’ve computed everything involved in previous posts, but it doesn’t give you any idea why one might think they are the same).
Theorem: If ${E/\mathbb{F}_q(t)}$ is an elliptic curve of height ${2}$ (occurring as the generic fiber of an elliptic K3 surface), then ${E}$ satisfies the Birch and Swinnerton-Dyer conjecture.
Idea: Using the machinery alluded to before, we spread out ${E}$ to an elliptic K3 surface ${X\rightarrow \mathbb{P}^1}$ over a finite field. As of this year, it seems the Tate conjecture is true for K3 surfaces (the proofs are all there, I’m not sure if they have been double checked and published yet). Thus ${Br(X)}$ is finite. Thus Ш${ (E)}$ is finite. But it is well-known that Ш${ (E)}$ being finite is equivalent to the Birch and Swinnerton-Dyer conjecture.
# Newton Polygons of p-Divisible Groups
I really wanted to move on from this topic, because the theory gets much more interesting when we move to ${p}$-divisible groups over some larger rings than just algebraically closed fields. Unfortunately, while looking over how Demazure builds the theory in Lectures on ${p}$-divisible Groups, I realized that it would be a crime to bring you this far and not concretely show you the power of thinking in terms of Newton polygons.
As usual, let’s fix an algebraically closed field of positive characteristic to work over. I was vague last time about the anti-equivalence of categories between ${p}$-divisible groups and ${F}$-crystals mostly because I was just going off of memory. When I looked it up, I found out I was slightly wrong. Let’s compute some examples of some slopes.
Recall that ${D(\mu_{p^\infty})\simeq W(k)}$ and ${F=p\sigma}$. In particular, ${F(1)=p\cdot 1}$, so in our ${F}$-crystal theory we get that the normalized ${p}$-adic valuation of the eigenvalue ${p}$ of ${F}$ is ${1}$. Recall that we called this the slope (it will become clear why in a moment).
Our other main example was ${D(\mathbb{Q}_p/\mathbb{Z}_p)\simeq W(k)}$ with ${F=\sigma}$. In this case we have ${1}$ is “the” eigenvalue which has ${p}$-adic valuation ${0}$. These slopes totally determine the ${F}$-crystal up to isomorphism, and the category of ${F}$-crystals (with slopes in the range ${0}$ to ${1}$) is anti-equivalent to the category of ${p}$-divisible groups.
The Dieudonné-Manin decomposition says that we can always decompose ${H=D(G)\otimes_W K}$ as a direct sum of vector spaces indexed by these slopes. For example, if I had a height three ${p}$-divisible group, ${H}$ would be three dimensional. If it decomposed as ${H_0\oplus H_1}$ where ${H_0}$ was ${2}$-dimensional (there is a repeated ${F}$-eigenvalue of slope ${0}$), then ${H_1}$ would be ${1}$-dimensional, and I could just read off that my ${p}$-divisible group must be isogenous to ${G\simeq \mu_{p^\infty}\oplus (\mathbb{Q}_p/\mathbb{Z}_p)^2}$.
In general, since we have a decomposition ${H=H_0\oplus H' \oplus H_1}$ where ${H'}$ is the part with slopes strictly in ${(0,1)}$ we get a decomposition ${G\simeq (\mu_{p^\infty})^{r_1}\oplus G' \oplus (\mathbb{Q}_p/\mathbb{Z}_p)^{r_0}}$ where ${r_j}$ is the dimension of ${H_j}$ and ${G'}$ does not have any factors of those forms.
This is where the Newton polygon comes in. We can visually arrange this information as follows. Put the slopes of ${F}$ in increasing order ${\lambda_1, \ldots, \lambda_r}$. Make a polygon in the first quadrant by plotting the points ${P_0=(0,0)}$, ${P_1=(\dim H_{\lambda_1}, \lambda_1 \dim H_{\lambda_1})}$, … , ${\displaystyle P_j=\left(\sum_{l=1}^j\dim H_{\lambda_l}, \sum_{l=1}^j \lambda_l\dim H_{\lambda_l}\right)}$.
This might look confusing, but all it says is to get from ${P_{j}}$ to ${P_{j+1}}$ make a line segment of slope ${\lambda_j}$ and make the segment go to the right for ${\dim H_{\lambda_j}}$. This way you visually encode the slope with the actual slope of the segment, and the longer the segment is the bigger the multiplicity of that eigenvalue.
But this way of encoding the information gives us something even better, because it turns out that all these ${P_i}$ must have integer coordinates (a highly non-obvious fact proved in the book by Demazure listed above). This greatly restricts our possibilities for Dieudonné ${F}$-crystals. Consider the height ${2}$ case. We have ${H}$ is two dimensional, so we have ${2}$ slopes (possibly the same). The maximal ${y}$ coordinate you could ever reach is if both slopes were maximal which is ${1}$. In that case you just get the line segment from ${(0,0)}$ to ${(2,2)}$. The lowest you could get is if the slopes were both ${0}$ in which case you get a line segment ${(0,0)}$ to ${(2,0)}$.
Every other possibility must be a polygon between these two with integer breakpoints and increasing order of slopes. Draw it (or just read on). You will see that there are obviously only two other possibilities. One goes ${(0,0)}$ to ${(1,0)}$ to ${(2,1)}$, which is a slope ${0}$ segment followed by a slope ${1}$ segment and corresponds to ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$. The other goes ${(0,0)}$ straight to ${(2,1)}$, which corresponds to a slope ${1/2}$ with multiplicity ${2}$, i.e. to ${E[p^\infty]}$ for supersingular elliptic curves ${E}$. That recovers our list from last time.
We now just have a bit of a game to determine all height ${3}$ ${p}$-divisible groups up to isogeny (and it turns out in this small height case that determines them up to isomorphism). You can just draw all the possibilities for Newton polygons with integer breakpoints as in the height ${2}$ case. This time there are ${8}$ of them: the four built entirely out of slopes ${0}$ and ${1}$, namely ${(\mu_{p^\infty})^3}$, ${(\mu_{p^\infty})^2\oplus \mathbb{Q}_p/\mathbb{Z}_p}$, ${\mu_{p^\infty}\oplus (\mathbb{Q}_p/\mathbb{Z}_p)^2}$, and ${(\mathbb{Q}_p/\mathbb{Z}_p)^3}$; the two that combine the slope ${1/2}$ segment of length ${2}$ with a slope ${0}$ or slope ${1}$ segment, namely ${\mathbb{Q}_p/\mathbb{Z}_p\oplus E[p^\infty]}$ and ${\mu_{p^\infty}\oplus E[p^\infty]}$ for a supersingular elliptic curve ${E}$; and two isoclinic ones, ${G_{1/3}}$ which corresponds to the thing with a triple eigenvalue of slope ${1/3}$, and ${G_{2/3}}$ which corresponds to the thing with a triple eigenvalue of slope ${2/3}$.
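Here is a short Python sketch of this game, offered only as a sanity check on the counts; it enumerates isogeny classes via the Dieudonné-Manin classification, as multisets of simple pieces of height ${r}$ and slope ${s/r}$ with ${\gcd(s,r)=1}$ and ${0\leq s/r\leq 1}$, which is equivalent to listing the integer-breakpoint Newton polygons described above. It prints the ${4}$ classes in height ${2}$ and the ${8}$ in height ${3}$.

```python
# Enumerate isogeny classes of p-divisible groups of a given height as multisets of
# simple pieces (height r, slope s/r with gcd(s, r) = 1 and 0 <= s/r <= 1), and print
# the breakpoints of the corresponding Newton polygons.
from fractions import Fraction
from math import gcd

def simple_slopes(r):
    # slopes of the simple pieces of height r
    return [Fraction(s, r) for s in range(r + 1) if gcd(s, r) == 1]

def isogeny_classes(height):
    classes = set()
    def build(remaining, min_height, pieces):
        if remaining == 0:
            classes.add(tuple(sorted(pieces)))
            return
        for r in range(min_height, remaining + 1):
            for slope in simple_slopes(r):
                build(remaining - r, r, pieces + [(r, slope)])
    build(height, 1, [])
    return sorted(classes)

def breakpoints(pieces):
    # group the pieces by slope, then walk the slopes in increasing order
    lengths = {}
    for r, slope in pieces:
        lengths[slope] = lengths.get(slope, 0) + r
    x, y = Fraction(0), Fraction(0)
    points = ["(0,0)"]
    for slope in sorted(lengths):
        x, y = x + lengths[slope], y + lengths[slope] * slope
        points.append(f"({x},{y})")
    return points

for h in (2, 3):
    classes = isogeny_classes(h)
    print(f"height {h}: {len(classes)} isogeny classes")
    for pieces in classes:
        slopes = [str(slope) for _, slope in pieces]
        print("  slopes", slopes, "-> breakpoints", " ".join(breakpoints(pieces)))
```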
To finish this post (and hopefully topic!) let’s bring this back to elliptic curves one more time. It turns out that ${D(E[p^\infty])\simeq H^1_{crys}(E/W)}$. Without reminding you of the technical mumbo-jumbo of crystalline cohomology, let’s think why this might be reasonable. We know ${E[p^\infty]}$ is always height ${2}$, so ${D(E[p^\infty])}$ is rank ${2}$. But if we consider that crystalline cohomology should be some sort of ${p}$-adic cohomology theory that “remembers topological information” (whatever that means), then we would guess that some topological ${H^1}$ of a “torus” should be rank ${2}$ as well.
Moreover, the crystalline cohomology comes with a natural Frobenius action. But if we believe there is some sort of Weil conjecture magic that also applies to crystalline cohomology (I mean, it is a Weil cohomology theory), then we would have to believe that the product of the eigenvalues of this Frobenius equals ${p}$. Recall in the “classical case” that the characteristic polynomial has the form ${x^2-a_px+p}$. So there are actually only two possibilities in this case, both slope ${1/2}$ or one of slope ${1}$ and the other of slope ${0}$. As we’ve noted, these are the two that occur.
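As a quick numerical sanity check (the two curves and the prime ${p=5}$ below are arbitrary illustrative choices), counting points gives ${a_p}$, and for ${p\geq 5}$ the two possibilities can be read off from whether ${a_p=0}$: slopes ${\{0,1\}}$ in the ordinary case and ${\{1/2,1/2\}}$ in the supersingular case.

```python
# Brute-force a_p for two elliptic curves over F_5 and read off the Newton slopes of
# x^2 - a_p*x + p: a_p = 0 gives slopes {1/2, 1/2} (supersingular), otherwise slopes
# {0, 1} (ordinary).  For p >= 5, supersingular over F_p means exactly a_p = 0.
p = 5

def a_p(rhs):
    # a_p = p + 1 - #E(F_p), counting affine points plus the point at infinity
    points = 1 + sum(1 for x in range(p) for y in range(p)
                     if (y * y - rhs(x)) % p == 0)
    return p + 1 - points

curves = [("y^2 = x^3 + x", lambda x: x ** 3 + x),
          ("y^2 = x^3 + 1", lambda x: x ** 3 + 1)]
for name, rhs in curves:
    a = a_p(rhs)
    slopes = "{1/2, 1/2} (supersingular)" if a % p == 0 else "{0, 1} (ordinary)"
    print(f"{name} over F_{p}: a_p = {a}, Newton slopes {slopes}")
```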
In fact, this is a more general phenomenon. When thinking about ${p}$-divisible groups arising from algebraic varieties, because of these Weil conjecture type considerations, the Newton polygons must actually fit into much narrower regions and sometimes this totally forces the whole thing. For example, the enlarged formal Brauer group of an ordinary K3 surface has height ${22}$, but the whole Newton polygon is fully determined by having to fit into a certain region and knowing its connected component.
# More Classification of p-Divisible Groups
Today we’ll look a little more closely at ${A[p^\infty]}$ for abelian varieties and finish up a different sort of classification that I’ve found more useful than the one presented earlier as triples ${(M,F,V)}$. For safety we’ll assume ${k}$ is algebraically closed of characteristic ${p>0}$ for the remainder of this post.
First, let’s note that we can explicitly describe all ${p}$-divisible groups over ${k}$ up to isomorphism (of any dimension!) up to height ${2}$ now. This is basically because height puts a pretty tight constraint on dimension: ${ht(G)=\dim(G)+\dim(G^D)}$. If we want to make this convention, we’ll say ${ht(G)=0}$ if and only if ${G=0}$, but I’m not sure it is useful anywhere.
For ${ht(G)=1}$ we have two cases: if ${\dim(G)=0}$, then its dual must be the unique connected ${p}$-divisible group of height ${1}$, namely ${\mu_{p^\infty}}$ and hence ${G=\mathbb{Q}_p/\mathbb{Z}_p}$. The other case we just said was ${\mu_{p^\infty}}$.
For ${ht(G)=2}$ we finally get something a little more interesting, but not too much more. From the height ${1}$ case we know that we can make three such examples: ${(\mu_{p^\infty})^{\oplus 2}}$, ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$, and ${(\mathbb{Q}_p/\mathbb{Z}_p)^{\oplus 2}}$. These are dimensions ${2}$, ${1}$, and ${0}$ respectively. The first and last are dual to each other and the middle one is self-dual. Last time we said there was at least one more: ${E[p^\infty]}$ for a supersingular elliptic curve. This was self-dual as well and the unique one-dimensional connected height ${2}$ ${p}$-divisible group. Now just playing around with the connected-étale decomposition, duals, and numerical constraints we get that this is the full list!
If we could get a bit better feel for the weird supersingular ${E[p^\infty]}$ case, then we would have a really good understanding of all ${p}$-divisible groups up through height ${2}$ (at least over algebraically closed fields).
There is an invariant called the ${a}$-number for abelian varieties defined by ${a(A)=\dim Hom(\alpha_p, A[p])}$. This essentially counts the number of copies of ${\alpha_p}$ sitting inside the truncated ${p}$-divisible group. Let’s consider the elliptic curve case again. If ${E/k}$ is ordinary, then we know ${E[p]}$ explicitly and hence can argue that ${a(E)=0}$. For the supersingular case we have that ${E[p]}$ is actually a non-split semi-direct product of ${\alpha_p}$ by itself and we get that ${a(E)=1}$. This shows that the ${a}$-number is an invariant that is equivalent to knowing ordinary/supersingular.
This is a phenomenon that generalizes. For an abelian variety ${A/k}$ we get that ${A}$ is ordinary if and only if ${a(A)=0}$ in which case the ${p}$-divisible group is a bunch of copies of ${E[p^\infty]}$ for an ordinary elliptic curve, i.e. ${A[p^\infty]\simeq E[p^\infty]^g}$. On the other hand, ${A}$ is supersingular if and only if ${A[p^\infty]}$ is isogenous to ${E[p^\infty]^g}$ for ${E/k}$ supersingular (these two facts are pretty easy if you use the ${p}$-rank as the definition of ordinary and supersingular because it tells you the étale part and you mess around with duals and numerics again).
Now that we’ve beaten that dead horse beyond recognition, I’ll point out one more type of classification which is the one that comes up most often for me. In general, there is not redundant information in the triple ${(M, F, V)}$, but for special classes of ${p}$-divisible groups (for example the ones I always work with explained here) all you need to remember is the ${(M, F)}$ to recover ${G}$ up to isomorphism.
A pair ${(M,F)}$ of a free, finite rank ${W}$-module equipped with a ${\phi}$-linear endomorphism ${F}$ is sometimes called a Cartier module or ${F}$-crystal. Every Dieudonné module of a ${p}$-divisible group is an example of one of these. We could also consider ${H=M\otimes_W K}$ where ${K=Frac(W)}$ to get a finite dimensional vector space in characteristic ${0}$ with a ${\phi}$-linear endomorphism preserving the ${W}$-lattice ${M\subset H}$.
Passing to this vector space we would expect to lose some information and this is usually called the associated ${F}$-isocrystal. But doing this gives us a beautiful classification theorem which was originally proved by Dieudonné and Manin. We have that ${H}$ is naturally an ${A}$-module where ${A=K[T]}$ is the noncommutative polynomial ring with ${T\cdot a=\phi(a)\cdot T}$. The classification is to break up ${H\simeq \oplus H_\alpha}$ into a slope decomposition.
These ${\alpha}$ are just rational numbers corresponding to the slopes of the ${F}$ operator. The eigenvalues ${\lambda_1, \ldots, \lambda_n}$ of ${F}$ are not necessarily well-defined, but if we pick the normalized valuation ${ord(p)=1}$, then the valuations of the eigenvalues are well-defined. Knowing the slopes and multiplicities completely determines ${H}$ up to isomorphism, so we can completely capture the information of ${H}$ in a simple Newton polygon. Note that when ${H}$ is the ${F}$-isocrystal of some Dieudonné module, then the relation ${FV=VF=p}$ forces all slopes to be between 0 and 1.
Unfortunately, knowing ${H}$ up to isomorphism only determines ${M}$ up to equivalence. This equivalence is easily seen to be the same as an injective map ${M\rightarrow M'}$ whose cokernel is a torsion ${W}$-module (that way it becomes an isomorphism when tensoring with ${K}$). But then by the anti-equivalence of categories two ${p}$-divisible groups (in the special subcategory that allows us to drop the ${V}$) ${G}$ and ${G'}$ have equivalent Dieudonné modules if and only if there is a surjective map ${G' \rightarrow G}$ whose kernel is finite, i.e. ${G}$ and ${G'}$ are isogenous as ${p}$-divisible groups.
Despite the annoying subtlety in fully determining ${G}$ up to isomorphism, this is still really good. It says that just knowing the valuation of some eigenvalues of an operator on a finite dimensional characteristic ${0}$ vector space allows us to recover ${G}$ up to isogeny.
# A Quick User’s Guide to Dieudonné Modules of p-Divisible Groups
Last time we saw that if we consider a ${p}$-divisible group ${G}$ over a perfect field of characteristic ${p>0}$, that there wasn’t a whole lot of information that went into determining it up to isomorphism. Today we’ll make this precise. It turns out that up to isomorphism we can translate ${G}$ into a small amount of (semi-)linear algebra.
I’ve actually discussed this before here. But let’s not get bogged down in the details of the construction. The important thing is to see how to use this information to milk out some interesting theorems fairly effortlessly. Let’s recall a few things. The category of ${p}$-divisible groups is (anti-)equivalent to the category of Dieudonné modules. We’ll denote this functor ${G\mapsto D(G)}$.
Let ${W:=W(k)}$ be the ring of Witt vectors of ${k}$ and ${\sigma}$ be the natural Frobenius map on ${W}$. There are only a few important things that come out of the construction from which you can derive tons of facts. First, the data of a Dieudonné module is a free ${W}$-module, ${M}$, of finite rank with a Frobenius ${F: M\rightarrow M}$ which is ${\sigma}$-linear and a Verschiebung ${V: M\rightarrow M}$ which is ${\sigma^{-1}}$-linear satisfying ${FV=VF=p}$.
Fact 1: The rank of ${D(G)}$ is the height of ${G}$.
Fact 2: The dimension of ${G}$ is the dimension of ${D(G)/FD(G)}$ as a ${k}$-vector space (dually, the dimension of ${D(G)/VD(G)}$ is the dimension of ${G^D}$).
Fact 3: ${G}$ is connected if and only if ${F}$ is topologically nilpotent (i.e. ${F^nD(G)\subset pD(G)}$ for ${n>>0}$). Dually, ${G^D}$ is connected if and only if ${V}$ is topologically nilpotent.
Fact 4: ${G}$ is étale if and only if ${F}$ is bijective. Dually, ${G^D}$ is étale if and only if ${V}$ is bijective.
These facts alone allow us to really get our hands dirty with what these things look like and how to get facts back about ${G}$ using linear algebra. Let’s compute the Dieudonné modules of the two “standard” ${p}$-divisible groups: ${\mu_{p^\infty}}$ and ${\mathbb{Q}_p/\mathbb{Z}_p}$ over ${k=\mathbb{F}_p}$ (recall in this situation that ${W(k)=\mathbb{Z}_p}$).
Before starting, we know that the standard Frobenius ${F(a_0, a_1, \ldots, )=(a_0^p, a_1^p, \ldots)}$ and Verschiebung ${V(a_0, a_1, \ldots, )=(0, a_0, a_1, \ldots )}$ satisfy the relations to make a Dieudonné module (the relations are a little tricky to check because constant multiples ${c\cdot (a_0, a_1, \ldots )}$ for ${c\in W}$ involve Witt multiplication and should be done using universal properties).
In this case ${F}$ is bijective so the corresponding ${G}$ must be étale. Also, ${VW\subset pW}$ so ${V}$ is topologically nilpotent which means ${G^D}$ is connected. Thus we have a height one, étale ${p}$-divisible group with one-dimensional, connected dual which means that ${G=\mathbb{Q}_p/\mathbb{Z}_p}$.
Now we’ll do ${\mu_{p^\infty}}$. Fact 1 tells us that ${D(\mu_{p^\infty})\simeq \mathbb{Z}_p}$ because it has height ${1}$. We also know that ${F: \mathbb{Z}_p\rightarrow \mathbb{Z}_p}$ must have the property that ${\mathbb{Z}_p/F(\mathbb{Z}_p)=\mathbb{F}_p}$ since ${\mu_{p^\infty}}$ has dimension ${1}$. Thus ${F=p\sigma}$ and hence ${V=\sigma^{-1}}$.
The proof of the anti-equivalence proceeds by working at finite stages and taking limits. So it turns out that the theory encompasses a lot more at the finite stages because ${\alpha_{p^n}}$ are perfectly legitimate finite, ${p}$-power rank group schemes (note the system does not form a ${p}$-divisible group because multiplication by ${p}$ is the zero morphism). Of course taking the limit ${\alpha_{p^\infty}}$ is also a formal ${p}$-torsion group scheme. If we wanted to we could build the theory of Dieudonné modules to encompass these types of things, but in the limit process we would have finite ${W}$-modules which are not necessarily free and we would get an extra “Fact 5” that ${D(G)}$ is free if and only if ${G}$ is ${p}$-divisible.
Let’s do two more things which are difficult to see without this machinery. For these two things we’ll assume ${k}$ is algebraically closed. There is a unique connected, ${1}$-dimensional ${p}$-divisible group of height ${h}$ over ${k}$. I imagine without Dieudonné theory this would be quite difficult, but it just falls right out by playing with these facts.
Since ${D(G)/FD(G)\simeq k}$ we can choose a basis, ${D(G)=We_1\oplus \cdots \oplus We_h}$, so that ${F(e_j)=e_{j+1}}$ and ${F(e_h)=pe_1}$. Up to change of coordinates, this is the only way that eventually ${F^nD(G)\subset pD(G)}$ (in fact ${n=h}$ is the smallest ${n}$ for which ${F^nD(G)\subset pD(G)}$). This also determines ${V}$ (note these two things need to be justified, I’m just asserting it here). But all the phrase “up to change of coordinates” means is that any other such ${(D(G'),F',V')}$ will be isomorphic to this one and hence by the equivalence of categories ${G\simeq G'}$.
Suppose that ${E/k}$ is an elliptic curve. Now we can determine ${E[p^\infty]}$ up to isomorphism as a ${p}$-divisible group, a task that seemed out of reach last time. We know that ${E[p^\infty]}$ always has height ${2}$ and dimension ${1}$. In previous posts, we saw that for an ordinary ${E}$ we have ${E[p^\infty]^{et}\simeq \mathbb{Q}_p/\mathbb{Z}_p}$ (we calculated the reduced part by using flat cohomology, but I’ll point out why this step isn’t necessary in a second).
Thus for an ordinary ${E/k}$ we get that ${E[p^\infty]\simeq E[p^\infty]^0\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ by the connected-étale decomposition. But height and dimension considerations tell us that ${E[p^\infty]^0}$ must be the unique height ${1}$, connected, ${1}$-dimensional ${p}$-divisible group, i.e. ${\mu_{p^\infty}}$. But of course we’ve been saying this all along: ${E[p^\infty]\simeq \mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$.
If ${E/k}$ is supersingular, then we’ve also calculated previously that ${E[p^\infty]^{et}=0}$. Thus by the connected-étale decomposition we get that ${E[p^\infty]\simeq E[p^\infty]^0}$ and hence must be the unique, connected, ${1}$-dimensional ${p}$-divisible group of height ${2}$. For reference, since ${ht(G)=\dim(G)+\dim(G^D)}$ we see that ${G^D}$ is also of dimension ${1}$ and height ${2}$. If it had an étale part, then it would have to be ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ again, so ${G^D}$ must be connected as well and hence is the unique such group, i.e. ${G\simeq G^D}$. It is connected with connected dual. This gives us our first non-obvious ${p}$-divisible group since it is not just some split extension of ${\mu_{p^\infty}}$‘s and ${\mathbb{Q}_p/\mathbb{Z}_p}$‘s.
If we hadn’t done these previous calculations, then we could still have gotten these results by a slightly more general argument. Given an abelian variety ${A/k}$ we have that ${A[p^\infty]}$ is a ${p}$-divisible group of height ${2g}$ where ${g=\dim A}$. Using Dieudonné theory we can abstractly argue that ${A[p^\infty]^{et}}$ must have height less than or equal to ${g}$. So in the case of an elliptic curve it is ${1}$ or ${0}$ corresponding to the ordinary or supersingular case respectively, and the proof would be completed because ${\mathbb{Q}_p/\mathbb{Z}_p}$ is the unique étale, height ${1}$, ${p}$-divisible group.
# p-Divisible Groups Revisited 1
I’ve posted about ${p}$-divisible groups all over the place over the past few years (see: here, here, and here). I’ll just do a quick recap here on the “classical setting” to remind you of what we know so far. This will kick-start a series on some more subtle aspects I’d like to discuss which are kind of scary at first.
Suppose ${G}$ is a ${p}$-divisible group over ${k}$, a perfect field of characteristic ${p>0}$. We can be extremely explicit in classifying all such objects. Recall that ${G}$ is just an injective limit of group schemes ${G=\varinjlim G_\nu}$ where we have an exact sequence ${0\rightarrow G_\nu \rightarrow G_{\nu+1}\stackrel{p^\nu}{\rightarrow} G_{\nu+1}}$ and there is a fixed integer ${h}$ such that the group schemes ${G_{\nu}}$ are finite of rank ${p^{\nu h}}$.
As a corollary of the standard connected-étale sequence for finite group schemes we get a canonical exact sequence, again called the connected-étale sequence:
$\displaystyle 0\rightarrow G^0 \rightarrow G \rightarrow G^{et} \rightarrow 0$
where ${G^0}$ is connected and ${G^{et}}$ is étale. Since ${k}$ was assumed to be perfect, this sequence actually splits. Thus ${G}$ is a semi-direct product of an étale ${p}$-divisible group and a connected ${p}$-divisible group. If you’ve seen the theory for finite, flat group schemes, then you’ll know that we usually decompose these two categories even further so that we get a piece that is connected with connected dual, connected with étale dual, étale with connected dual, and étale with étale dual.
The standard examples to keep in mind for these four categories are ${\alpha_p}$, ${\mu_p}$, ${\mathbb{Z}/p}$, and ${\mathbb{Z}/\ell}$ for ${\ell\neq p}$ respectively. When we restrict ourselves to ${p}$-divisible groups the last category can’t appear in the decomposition of ${G_\nu}$ (since étale things are dimension 0, if something and its dual are both étale, then it would have to have height 0). I think it is not a priori clear, but the four category decomposition is a direct sum decomposition, and hence in this case we get that ${G\simeq G^0\oplus G^{et}}$ giving us a really clear idea of what these things look like.
As usual we can describe étale group schemes in a nice way because they are just constant after base change. Thus the functor ${G^{et}\mapsto G^{et}(\overline{k})}$ is an equivalence of categories between étale ${p}$-divisible groups and the category of inverse systems of ${Gal(\overline{k}/k)}$-sets of order ${p^{\nu h}}$. Thus, after sufficient base change, we get an abstract isomorphism with the constant group scheme ${\prod \mathbb{Q}_p/\mathbb{Z}_p}$ for some product (for the ${p}$-divisible group case it will be a finite direct sum).
All we have left now is to describe the possibilities for ${G^0}$, but this is a classical result as well. There is an equivalence of categories between the category of divisible, commutative, formal Lie groups and connected ${p}$-divisible groups given simply by taking the colimit of the ${p^n}$-torsion ${A\mapsto \varinjlim A[p^n]}$. The canonical example to keep in mind is ${\varinjlim \mathbb{G}_m[p^n]=\mu_{p^\infty}}$. This is connected only because in characteristic ${p}$ we have ${(x^p-1)=(x-1)^p}$, so ${\mu_{p^n}=Spec(k[x]/(x-1)^{p^n})}$. In any other characteristic this group scheme would be étale and totally disconnected.
This brings us to the first subtlety which can cause a lot of confusion because of the abuse of notation. A few posts ago we talked about the fact that ${E[p]}$ for an elliptic curve was either ${\mathbb{Z}/p}$ or ${0}$ depending on whether it was ordinary or supersingular (respectively). It is dangerous to write this, because here we mean ${E}$ as a group (really ${E(\overline{k})}$) and ${E[p]}$ the ${p}$-torsion in this group.
When talking about the ${p}$-divisible group ${E[p^\infty]=\varinjlim E[p^n]}$ we are referring to ${E/k}$ as a group scheme and ${E[p^n]}$ as the (always!) non-trivial, finite, flat group scheme which is the kernel of the isogeny ${p^n: E\rightarrow E}$. The first way kills off the infinitesimal part so that we are just left with some nice reduced thing, and that’s why we can get ${0}$, because for a supersingular elliptic curve the group scheme ${E[p^n]}$ is purely infinitesimal, i.e. has trivial étale part.
Recall also that we pointed out that ${E[p]\simeq \mathbb{Z}/p}$ for an ordinary elliptic curve by using some flat cohomology trick. But this trick is only telling us that the reduced group is cyclic of order ${p}$, but it does not tell us the scheme structure. In fact, in this case ${E[p^n]\simeq \mu_{p^n}\oplus \mathbb{Z}/p^n}$ giving us ${E[p^\infty]\simeq \mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$. So this is a word of warning that when working these things out you need to be very careful that you understand whether or not you are figuring out the full group scheme structure or just reduced part. It can be hard to tell sometimes.
# Frobenius Semi-linear Algebra 2
Recall our setup. We have an algebraically closed field ${k}$ of characteristic ${p>0}$. We let ${V}$ be a finite dimensional ${k}$-vector space and ${\phi: V\rightarrow V}$ a ${p}$-linear map. Last time we left unfinished the Jordan decomposition that says that ${V=V_s\oplus V_n}$ where the two components are stable under ${\phi}$ and ${\phi}$ acts bijectively on ${V_s}$ and nilpotently on ${V_n}$.
We then considered a strange consequence of what happens on the part on which it acts bijectively. If ${\phi}$ is bijective, then there always exists a full basis ${v_1, \ldots, v_n}$ that are fixed by ${\phi}$, i.e. ${\phi(v_i)=v_i}$. This is strange indeed, because in linear algebra this would force our operator to be the identity.
There is one more slightly more disturbing consequence of this. If ${\phi}$ is bijective, then ${\phi-Id}$ is always surjective. This is a trivial consequence of having a fixed basis. Let ${w\in V}$. We want to find some ${z}$ such that ${\phi(z)-z=w}$. Well, we just construct the coefficients in the fixed basis by hand. We know ${w=\sum c_i v_i}$ for some ${c_i\in k}$. If ${z=\sum a_i v_i}$ really satisfies ${\phi(z)-z=w}$, then by comparing coefficients such an element exists if and only if we can solve ${a_i^p-a_i=c_i}$. These are just polynomial equations, so we can solve this over our algebraically closed field to get our coefficients.
Strangely enough we really require algebraically closed and not merely perfect again, but the papers I’ve been reading explicitly require these facts over finite fields. Since they don’t give any references at all and just call these things “standard facts about ${p}$-linear algebra,” I’m not sure if there is a less stupid way to prove these things which work for arbitrary perfect fields. This is why you should give citations for things you don’t prove!!
Why do I call this disturbing? Well, these maps really do appear when doing long exact sequences in cohomology. Last time we saw that we could prove that ${E[p]\simeq \mathbb{Z}/p}$ for an ordinary elliptic curve from computing the kernel of ${C-Id}$ where ${C}$ was the Cartier operator. But we have to be really, really careful to avoid linear algebra tricks when these maps come up, because in this situation we have ${\phi -Id}$ is always a surjective map between finite dimensional vector spaces of the same dimension, but also always has a non-trivial kernel isomorphic to ${\mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$ where the number of factors is the dimension of ${V}$. Even though we have a surjective map in the long exact sequence between vector spaces of the same dimension, we cannot conclude that it is bijective!
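To make this warning concrete, here is a toy Python computation; the field ${\mathbb{F}_9}$ and its representation below are just illustrative choices, and the point is only to see the counts. Take the one-dimensional case ${V=k}$ with ${\phi}$ the ${p}$-power Frobenius, but over the finite field ${\mathbb{F}_9}$ rather than an algebraically closed field: the kernel of ${\phi-Id}$ is ${\mathbb{F}_3\simeq \mathbb{Z}/3}$, exactly as predicted, while the image has index ${3}$, so the surjectivity statement really does need the algebraically closed hypothesis.

```python
# F_9 realized as F_3[t]/(t^2 + 1); elements are pairs (a, b) standing for a + b*t.
# phi is the 3-power Frobenius x -> x^3; we tabulate the kernel and image of phi - Id.
P = 3

def mul(u, v):
    # (a + b*t)(c + d*t) with t^2 = -1
    a, b = u
    c, d = v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def frob(u):
    return mul(mul(u, u), u)   # x -> x^3

def sub(u, v):
    return ((u[0] - v[0]) % P, (u[1] - v[1]) % P)

elements = [(a, b) for a in range(P) for b in range(P)]
kernel = [u for u in elements if frob(u) == u]
image = {sub(frob(u), u) for u in elements}
print("kernel of phi - Id:", kernel)   # the prime field F_3, i.e. Z/3
print("image size:", len(image), "of", len(elements), "so phi - Id is not surjective here")
```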
Since everything we keep considering as real-life examples of semi-linear algebra has automatically been bijective (i.e. no nilpotent part), I haven’t actually been too concerned with the Jordan decomposition. But we may as well discuss it to round out the theory since people who work with ${p}$-Lie algebras care … I think?
The idea of the proof is simple and related to what we did last time. We look at iterates ${\phi^j}$ of our map. We get a descending chain ${\phi^j(V)\supset \phi^{j+1}(V)}$ and hence it stabilizes somewhere, since even though ${\phi}$ is not a linear map, the image is still a vector subspace of ${V}$. Let ${r}$ be the smallest integer such that ${\phi^r(V)=\phi^{r+1}(V)}$. This means that ${r}$ is also the smallest integer such that ${\ker\phi^r=\ker \phi^{r+1}}$.
Now we just take as our definition ${V_s=\phi^r(V)}$ and ${V_n=\ker \phi^r}$. Now by definition we get everything we want. It is just the kernel/image decomposition and hence a direct sum. By the choice of ${r}$ we certainly get that ${\phi}$ maps ${V_s}$ to ${V_s}$ and ${V_n}$ to ${V_n}$. Also, ${\phi|_{V_s}}$ is bijective by construction. Lastly, if ${v\in V_n}$, then ${\phi^j(v)=0}$ for some ${0\leq j\leq r}$ and hence ${\phi}$ is nilpotent on ${V_n}$. This is what we wanted to show.
Here’s how this comes up for ${p}$-Lie algebras. Suppose you have some Lie group ${G/k}$ with Lie algebra ${\mathfrak{g}}$. You have the standard ${p}$-power map which is ${p}$-linear on ${\mathfrak{g}}$. By the structure theorem above ${\mathfrak{g}\simeq \mathfrak{h}\oplus \mathfrak{f}}$. The Lie subalgebra ${\mathfrak{h}}$ is the part the ${p}$-power map acts bijectively on and is called the core of the Lie algebra.
Let ${X_1, \ldots, X_d}$ be a fixed basis of the core. We get a nice combinatorial classification of the Lie subalgebras of ${\mathfrak{h}}$. Let ${V=Span_{\mathbb{F}_p}\langle X_1, \ldots, X_d\rangle}$. The Lie subalgebras of ${\mathfrak{h}}$ are in bijective correspondence with the vector subspaces of ${V}$. In particular, the number of Lie subalgebras is finite and each occurs as a direct summand. The proof of this fact is to just repeat the argument of the Jordan decomposition for a Lie subalgebra and look at coefficients of the fixed basis.
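As a tiny sanity check on the finiteness statement, and assuming only the bijection just described, the number of Lie subalgebras of the core equals the number of ${\mathbb{F}_p}$-subspaces of ${\mathbb{F}_p^d}$, a finite sum of Gaussian binomial coefficients; here is a short Python computation of that count.

```python
# Count the F_p-subspaces of F_p^d as a sum of Gaussian binomial coefficients
# [d choose k]_p; via the stated bijection this also counts Lie subalgebras of the core.
def gaussian_binomial(d, k, p):
    numerator = denominator = 1
    for i in range(k):
        numerator *= p ** (d - i) - 1
        denominator *= p ** (k - i) - 1
    return numerator // denominator

def count_subspaces(d, p):
    return sum(gaussian_binomial(d, k, p) for k in range(d + 1))

for p in (2, 3, 5):
    print([count_subspaces(d, p) for d in (1, 2, 3)], f"subspaces of F_{p}^d for d = 1, 2, 3")
```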
# Frobenius Semi-linear Algebra: 1
Today I want to explain some “well-known” facts in semilinear algebra. Here’s the setup. For safety we’ll assume ${k}$ is algebraically closed of characteristic ${p>0}$ (but merely being perfect should suffice for the main point later). Let ${V}$ be a finite dimensional vector space over ${k}$. Consider some ${p}$-semilinear operator on ${V}$ say ${\phi: V\rightarrow V}$. The fact that we are working with ${p}$ instead of ${p^{-1}}$ is mostly to not scare people. I think ${p^{-1}}$ actually appears more often in the literature and the theory is equivalent by “dualizing.”
All this means is that ${\phi}$ is additive, satisfying the usual property ${\phi(v+w)=\phi(v)+\phi(w)}$, etc, except that the scalar rule twists by the ${p}$-th power: ${\phi(av)=a^p\phi(v)}$. This situation comes up surprisingly often in positive characteristic geometry, because often you want to analyze some long exact sequence in cohomology associated to a short exact sequence which involves the Frobenius map or the Cartier operator. The former will induce a ${p}$-linear map of vector spaces and the latter induces a ${p^{-1}}$-linear map.
The facts we’re going to look at I’ve found in three or so papers just saying “from a well-known fact about ${p^{-1}}$-linear operators…” I wish there was a book out there that developed this theory like a standard linear algebra text so that people could actually give references. The proof today is a modification of that given in Dieudonne’s Lie Groups and Lie Hyperalgebras over a Field of Characteristic ${p>0}$ II (section 10).
Let’s start with an example. In the one-dimensional case we have the following ${\phi: k\rightarrow k}$. If the map is non-trivial, then it is bijective. More importantly we can just write down every one of these because if ${\phi(1)=a}$, then
$\displaystyle \begin{array}{rcl} \phi(x) & = & \phi(x\cdot 1) \\ & = & x^p\phi(1) \\ & = & ax^p \end{array}$
In fact, we can always find some non-zero fixed element, because this amounts to solving ${ax^p-x=x(ax^{p-1}-1)=0}$, i.e. finding a solution to ${ax^{p-1}-1=0}$ which we can do by being algebraically closed. This element ${b}$ obviously serves as a basis for ${k}$, but to set up an analogy we also see that ${Span_{\mathbb{F}_p}(b)}$ is exactly the set of fixed points of ${\phi}$. In general ${V}$ will break up into parts. The part that ${\phi}$ acts bijectively on will always have a basis of fixed elements whose ${\mathbb{F}_p}$-span consists of exactly the fixed points of ${\phi}$. Of course, this could never happen in linear algebra because finding a fixed basis implies the operator is the identity.
Let’s start by proving this statement. Suppose ${\phi: V\rightarrow V}$ is a ${p}$-semilinear automorphism. We want to find a basis of fixed elements. We essentially mimic what we did before in a more complicated way. We induct on the dimension of ${V}$. If we can find a single ${v_1}$ fixed by ${\phi}$, then we would be done for the following reason. We kill off the span of ${v_1}$, then by the inductive hypothesis we can find ${v_2, \ldots, v_n}$ a fixed basis for the quotient. A lift ${v}$ of a fixed vector in the quotient only satisfies ${\phi(v)=v+cv_1}$ for some ${c\in k}$, but replacing ${v}$ by ${v+tv_1}$ with ${t^p-t+c=0}$ (solvable since ${k}$ is algebraically closed) gives an honestly fixed vector, so together these lifts make a fixed basis for all of ${V}$.
Now we need to find a single fixed ${v_1}$ by brute force. Consider any non-zero ${w\in V}$. We start taking iterates of ${w}$ under ${\phi}$. Eventually they will become linearly dependent, so we consider ${w, \phi(w), \ldots, \phi^k(w)}$ for the minimal ${k}$ such that this is a linearly dependent set. This means we can find some coefficients that are not all ${0}$ for which ${\sum a_j \phi^j(w)=0}$.
Let’s just see what must be true of some fictional ${v_1}$ in the span of these elements such that ${\phi(v_1)=v_1}$. Well, ${v_1=\sum b_j \phi^j(w)}$ must satisfy ${v_1=\phi(v_1)=\sum b_j^p \phi^{j+1}(w)}$.
To make this easier to parse, let's specialize to the case of a dependence among the three iterates ${w, \phi(w), \phi^2(w)}$. This means that ${a_0 w+a_1\phi(w)+a_2\phi^2(w)=0}$, and by minimality the coefficient on the top power can't be zero, so we can rewrite the top power as ${\phi^2(w)=-(a_0/a_2)w - (a_1/a_2)\phi(w)}$.
The other equation is
$\displaystyle \begin{array}{rcl} b_0w+b_1\phi(w) & = & b_0^p\phi(w)+b_1^p\phi^2(w)\\ & = & -(a_0/a_2)b_1^pw +(b_0^p-(a_1/a_2)b_1^p)\phi(w) \end{array}$
Comparing coefficients gives ${b_0=-(a_0/a_2)b_1^p}$, and then forward substituting gives ${b_1=-(a_0/a_2)^pb_1^{p^2}-(a_1/a_2)b_1^p}$. Ah, but we know the ${a_j}$ and this only involves the unknown ${b_1}$. Since ${k}$ is algebraically closed we can solve for such a ${b_1}$, and we can take it to be non-zero: factoring ${b_1}$ out of the equation leaves a non-constant polynomial (its top coefficient involves ${a_0}$, which is non-zero by minimality) with constant term ${1}$, so it has a root and that root cannot be zero. Then since we wrote all the other coefficients in terms of ${b_1}$ we can produce a non-zero fixed ${v_1}$ by brute force, determining the coefficients of the vector in terms of our linear dependence coefficients.
There was nothing special about using three iterates here. In general the trick works because applying ${\phi}$ cycles the vectors forward by one, which lets us keep forward substituting the equations from the comparison of coefficients until everything, including the highest coefficient itself, is expressed in terms of the highest one; this turns the problem into solving a single polynomial equation over our algebraically closed field.
This completes the proof that if ${\phi}$ is bijective, then there is a basis of fixed vectors. The fact that ${V^\phi=Span_{\mathbb{F}_p}(v_1, \ldots, v_n)}$ is pretty easy after that. Of course, the ${\mathbb{F}_p}$-span is contained in the fixed points because by definition the prime subfield of ${k}$ is exactly the fixed elements of ${x\mapsto x^p}$. On the other hand, if ${c=\sum a_jv_j}$ is fixed, then ${c=\phi(c)=\sum a_j^p \phi(v_j)=\sum a_j^p v_j}$ shows that all the coefficients must be fixed by Frobenius and hence in ${\mathbb{F}_p}$.
Here’s how this is useful. Recall the post on the fppf site. We said that if we wanted to understand the ${p}$-torsion of certain cohomology with coefficients in ${\mathbb{G}_m}$ (Picard group, Brauer group, etc), then we should look at the flat cohomology with coefficients in ${\mu_p}$. If we specialize to the case of curves we get an isomorphism ${H^1_{fl}(X, \mu_p)\simeq Pic(X)[p]}$.
Recall the exact sequence at the end of that post. It told us that via the ${d\log}$ map ${H^1_{fl}(X, \mu_p)=ker(C-I)=H^0(X, \Omega^1)^C}$. Now we have a ridiculously complicated way to prove the following well-known fact. If ${E}$ is an ordinary elliptic curve over an algebraically closed field of characteristic ${p>0}$, then ${E[p]\simeq \mathbb{Z}/p}$. In fact, we can prove something slightly more general.
By definition, a curve is of genus ${g}$ if ${H^0(X, \Omega^1)}$ is ${g}$-dimensional. We’ll say ${X}$ is ordinary if the Cartier operator ${C}$ is a ${p^{-1}}$-linear automorphism (I’m already sweeping something under the rug, because to even think of the Cartier operator acting on this cohomology group we need a hypothesis like ordinary to naturally identify some cohomology groups).
By the results in this post we know that the structure of ${H^0(X, \Omega^1)^C}$ as an abelian group is ${\mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$ where there are ${g}$ copies. Thus in more generality this tells us that ${Jac(X)[p]\simeq Pic(X)[p]\simeq H^0(X, \Omega^1)^C\simeq \mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$. In particular, since for an elliptic curve (genus 1) we have ${Jac(E)=E}$, this statement is exactly ${E[p]\simeq \mathbb{Z}/p}$.
This point is a little silly, because Silverman seems to just use this as the definition of an ordinary elliptic curve. Hartshorne uses the Hasse invariant, in which case it is quite easy to derive that the Cartier operator is an automorphism (proof: it is Serre dual to the Frobenius, which by the Hasse invariant definition is an automorphism). Using this definition, I'm actually not sure I've ever seen a derivation that ${E[p]\simeq \mathbb{Z}/p}$. I'd be interested if there is a lower level way of seeing it than going through this flat cohomology argument (Silverman cites a paper of Deuring, but it's in German).
# Serre-Tate Theory 2
I guess this will be the last post on this topic. I’ll explain a tiny bit about what goes into the proof of this theorem and then why anyone would care that such canonical lifts exist. On the first point, there are tons of details that go into the proof. For example, Nick Katz’s article, Serre-Tate Local Moduli, is 65 pages. It is quite good if you want to learn more about this. Also, Messing’s book The Crystals Associated to Barsotti-Tate Groups is essentially building the machinery for this proof which is then knocked off in an appendix. So this isn’t quick or easy by any means.
On the other hand, I think the idea of the proof is fairly straightforward. Let's briefly recall last time. The situation is that we have an ordinary elliptic curve ${E_0/k}$ over an algebraically closed field of characteristic ${p>2}$. We want to understand ${Def_{E_0}}$, but in particular whether or not there is some distinguished lift to characteristic ${0}$ (this will be an element of ${Def_{E_0}(W(k))}$).
To make the problem more manageable we consider the ${p}$-divisible group ${E_0[p^\infty]}$ attached to ${E_0}$. In the ordinary case this is the enlarged formal Picard group. It has height ${2}$, and its connected component is ${\widehat{Pic}_{E_0}\simeq\mu_{p^\infty}}$. There is a natural map ${Def_{E_0}\rightarrow Def_{E_0[p^\infty]}}$ just by mapping ${E/R \mapsto E[p^\infty]}$. Last time we said the main theorem was that this map is an isomorphism. To tie this back to the flat topology stuff, ${E_0[p^\infty]}$ is the group representing the functor ${A\mapsto H^1_{fl}(E_0\otimes A, \mu_{p^\infty})}$.
The first step in proving the main theorem is to note two things. In the (split) connected-etale sequence
$\displaystyle 0\rightarrow \mu_{p^\infty}\rightarrow E_0[p^\infty]\rightarrow \mathbb{Q}_p/\mathbb{Z}_p\rightarrow 0$
we have that ${\mu_{p^\infty}}$ is height one and hence rigid. We have that ${\mathbb{Q}_p/\mathbb{Z}_p}$ is etale and hence rigid. Thus given any deformation ${G/R}$ of ${E_0[p^\infty]}$ we can take the connected-etale sequence of this and see that ${G^0}$ is the unique deformation of ${\mu_{p^\infty}}$ over ${R}$ and ${G^{et}=\mathbb{Q}_p/\mathbb{Z}_p}$. Thus the deformation functor can be redescribed in terms of extension classes of two rigid groups ${R\mapsto Ext_R^1(\mathbb{Q}_p/\mathbb{Z}_p, \mu_{p^\infty})}$.
Now we see what the canonical lift is. Supposing our isomorphism of deformation functors, it is the lift that corresponds to the split and hence trivial extension class. So how do we actually check that this is an isomorphism? Like I said, it is kind of long and tedious. Roughly speaking you note that both deformation functors are prorepresentable by formally smooth objects of the same dimension. So we need to check that the differential is an isomorphism on tangent spaces.
Here’s where some cleverness happens. You rewrite the differential as a composition of a whole bunch of maps that you know are isomorphisms. In particular, it is the following string of maps: The Kodaira-Spencer map ${T\stackrel{\sim}{\rightarrow} H^1(E_0, \mathcal{T})}$ followed by Serre duality (recall the canonical is trivial on an elliptic curve) ${H^1(E_0, \mathcal{T})\stackrel{\sim}{\rightarrow} Hom_k(H^1(E_0, \Omega^1), H^1(E_0, \mathcal{O}_{E_0}))}$. The hardest one was briefly mentioned a few posts ago and is the dlog map which gives an isomorphism ${H^2_{fl}(E_0, \mu_{p^\infty})\stackrel{\sim}{\rightarrow} H^1(E_0, \Omega^1)}$.
Now noting that ${H^2_{fl}(E_0, \mu_{p^\infty})=\mathbb{Q}_p/\mathbb{Z}_p}$ and that ${T_0\mu_{p^\infty}\simeq H^1(E_0, \mathcal{O}_{E_0})}$ gives us enough compositions and isomorphisms that we get from the tangent space of the versal deformation of ${E_0}$ to the tangent space of the versal deformation of ${E_0[p^\infty]}$. As you might guess, it is a pain to actually check that this is the differential of the natural map (and in fact involves further decomposing those maps into yet other ones). It turns out to be the case and hence ${Def_{E_0}\rightarrow Def_{E_0[p^\infty]}}$ is an isomorphism and the canonical lift corresponds to the trivial extension.
But why should we care? It turns out the geometry of the canonical lift is very special. This may not be that impressive for elliptic curves, but this theory all goes through for any ordinary abelian variety or K3 surface where it is much more interesting. It turns out that you can choose a nice set of coordinates (“canonical coordinates”) on the base of the versal deformation and a basis of the de Rham cohomology of the family that is adapted to the Hodge filtration such that in these coordinates the Gauss-Manin connection has an explicit and nice form.
Also, the canonical lift admits a lift of the Frobenius which is also nice and compatible with how it acts on the above chosen basis on the de Rham cohomology. These coordinates are what give the base of the versal deformation the structure of a formal torus (product of ${\widehat{\mathbb{G}_m}}$'s). One can then exploit all this nice structure to prove large open problems like the Tate conjecture in the special cases of the class of varieties that have these canonical lifts.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 746, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9501590132713318, "perplexity": 147.89497977988447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277475.33/warc/CC-MAIN-20160524002117-00061-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://www.cfd-online.com/Wiki/Vorticity_transport_equation
|
# Vorticity transport equation
The vorticity transport equation governs the evolution of the vorticity, the curl of the velocity field. It is obtained by taking the curl of the momentum equation, which in index notation reads
$\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} + \frac{1}{\rho} \frac{\partial p}{\partial x_i} = \frac{1}{\rho}\frac{\partial \tau_{ij}}{\partial x_j}$
Taking the curl and specializing to incompressible flow with constant kinematic viscosity $\nu$ gives the vorticity transport equation
$\frac{\partial \omega_i}{\partial t} + u_j \frac{\partial \omega_i}{\partial x_j} = \omega_j \frac{\partial u_i}{\partial x_j} + \nu \Delta \omega_i$
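A minimal numerical sketch (assuming NumPy; the velocity field and grid below are chosen purely for illustration): for the rigid-body rotation $u = -\Omega y$, $v = \Omega x$, the z-vorticity $\omega_z = \partial v/\partial x - \partial u/\partial y$ computed by finite differences should come out as the constant $2\Omega$.

```python
import numpy as np

# Rigid-body rotation u = -Omega*y, v = Omega*x; its exact vorticity is 2*Omega.
Omega = 1.5
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")   # axis 0 <-> x, axis 1 <-> y

u = -Omega * Y
v = Omega * X

dvdx = np.gradient(v, x, axis=0)          # d v / d x
dudy = np.gradient(u, y, axis=1)          # d u / d y
omega_z = dvdx - dudy

print(omega_z.mean(), 2 * Omega)          # both ~ 3.0
```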
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9624321460723877, "perplexity": 345.3145849872583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00461-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.nature.com/articles/nclimate1491?error=cookies_not_supported&code=5a935ee0-c656-4dc4-921b-7624b5056de3
|
# Response of corn markets to climate volatility under alternative energy futures
## Abstract
Recent price spikes1,2 have raised concern that climate change could increase food insecurity by reducing grain yields in the coming decades3,4. However, commodity price volatility is also influenced by other factors5,6, which may either exacerbate or buffer the effects of climate change. Here we show that US corn price volatility exhibits higher sensitivity to near-term climate change than to energy policy influences or agriculture–energy market integration, and that the presence of a biofuels mandate enhances the sensitivity to climate change by more than 50%. The climate change impact is driven primarily by intensification of severe hot conditions in the primary corn-growing region of the United States, which causes US corn price volatility to increase sharply in response to global warming projected to occur over the next three decades. Closer integration of agriculture and energy markets moderates the effects of climate change, unless the biofuels mandate becomes binding, in which case corn price volatility is instead exacerbated. However, in spite of the substantial impact on US corn price volatility, we find relatively small impact on food prices. Our findings highlight the critical importance of interactions between energy policies, energy–agriculture linkages and climate change.
## Acknowledgements
We thank W. Schlenker for sharing his data and parameter estimates with us. We are grateful for insightful and constructive comments from participants in the International Agricultural Trade Consortium theme day and the Stanford Environmental Economics Seminar series. We thank NCEP for providing access to the NARR data set, and the PRISM Climate Group for providing access to the PRISM observational data set. We thank the Rosen Center for Advanced Computing (RCAC) at Purdue University and the Center for Computational Earth and Environmental Science (CEES) at Stanford University for access to computing resources. The research reported here was primarily supported by the US DOE, Office of Science, Office of Biological and Environmental Research, Integrated Assessment Research Program, Grant No. DE-SC005171, along with supplementary support from NSF award 0955283 and NIH award 1R01AI090159-01.
## Author information
### Contributions
N.S.D. designed and performed the climate modelling, designed the climate–yield–economic modelling approach, analysed the results and wrote the paper. T.W.H. designed the climate–yield–economic modelling approach, designed the economic modelling, analysed the results and wrote the paper. M.S. designed the climate–yield–economic modelling approach, performed the yield calculations and analysed the results. M.V. designed the climate–yield–economic modelling approach, performed the economic modelling, analysed the results and wrote the paper.
### Corresponding author
Correspondence to Noah S. Diffenbaugh.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
Diffenbaugh, N., Hertel, T., Scherer, M. et al. Response of corn markets to climate volatility under alternative energy futures. Nature Clim Change 2, 514–518 (2012). https://doi.org/10.1038/nclimate1491
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23733489215373993, "perplexity": 8827.379202972563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00583.warc.gz"}
|
http://link.springer.com/article/10.1007/s11064-011-0619-7?wt_mc=3rd%20party%20website.Other.BIO871-Increased%20Oxidative%20top%20cited%20Dec%2017
|
Volume 37, Issue 2, pp 358-369
Date: 05 Oct 2011
# Increased Oxidative Damage and Decreased Antioxidant Function in Aging Human Substantia Nigra Compared to Striatum: Implications for Parkinson’s Disease
## Abstract
Parkinson's disease (PD) is characterized by selective degeneration and loss of dopaminergic neurons in the substantia nigra (SN) of the ventral midbrain leading to dopamine depletion in the striatum. Oxidative stress and mitochondrial damage have been implicated in the death of SN neurons during the evolution of PD. In our previous study on human PD brains, we observed that compared to SN, striatum was significantly protected against oxidative damage and mitochondrial dysfunction. To understand whether brain aging contributes to the vulnerability of midbrain to neurodegeneration in PD compared to striatum, we assessed the status of oxidant and antioxidant markers, glutathione metabolic enzymes, glial fibrillary acidic protein (GFAP) expression and mitochondrial complex I (CI) activity in SN (n = 23) and caudate nucleus (CD) (n = 24) during physiological aging in human brains. We observed a significant increase in protein oxidation (P < 0.001), loss of CI activity (P = 0.04) and increased astrocytic proliferation indicated by GFAP expression (P < 0.001) in SN compared to CD with increasing age. These changes were attributed to significant decrease in antioxidant function represented by superoxide dismutase (SOD) (P = 0.03), glutathione (GSH) peroxidase (GPx) (P = 0.02) and GSH reductase (GR) (P = 0.03) and a decreasing trend in total GSH and catalase with increasing age. However, these parameters were relatively unaltered in CD. We propose that SN undergoes extensive oxidative damage, loss of antioxidant and mitochondrial function and increased GFAP expression during physiological aging which might make it more vulnerable to neurotoxic insults, thus contributing to selective degeneration during the evolution of PD.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116908669471741, "perplexity": 10015.485142337971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442420.22/warc/CC-MAIN-20141017005722-00165-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://www.computer.org/csdl/trans/tp/1990/04/i0321-abs.html
|
Issue No.04 - April (1990 vol.12)
pp: 321-344
ABSTRACT
The general principles of detection, classification, and measurement of discontinuities are studied. The following issues are discussed: detecting the location of discontinuities; classifying discontinuities by their degrees; measuring the size of discontinuities; and coping with the random noise and designing optimal discontinuity detectors. An algorithm is proposed for discontinuity detection from an input signal S. For degree k discontinuity detection and measurement, a detector (P, Phi) is used, where P is the pattern and Phi is the corresponding filter. If there is a degree k discontinuity at location t_0, then in the filter response there is a scaled pattern alpha P at t_0, where alpha is the size of the discontinuity. This reduces the problem to searching for the scaled pattern in the filter response. A statistical method is proposed for the approximate pattern matching. To cope with the random noise, a study is made of optimal detectors, which minimize the effects of noise.
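As a rough illustration of the reduction to "searching for the scaled pattern in the filter response" (a hedged sketch only; the function, the signal, and all names below are invented for illustration and are not from the paper), one can estimate the size alpha at each location by least squares and report the location where the pattern explains the most energy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "filter response": a scaled copy (alpha * P) of a known pattern P
# embedded at location t0, plus noise.  Everything here is made up.
pattern = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
alpha_true, t0 = 2.5, 40
response = 0.05 * rng.standard_normal(100)
response[t0:t0 + len(pattern)] += alpha_true * pattern

def match_scaled_pattern(response, pattern):
    """Slide the pattern over the response; at each offset the least-squares
    scale is alpha = <w, P> / <P, P>.  Report the offset where the pattern
    explains the most energy, which is where a discontinuity is declared."""
    pp = float(pattern @ pattern)
    best_t, best_alpha, best_score = None, 0.0, -np.inf
    for t in range(len(response) - len(pattern) + 1):
        w = response[t:t + len(pattern)]
        alpha = float(w @ pattern) / pp      # least-squares estimate of the size
        score = alpha * alpha * pp           # energy explained by alpha * P
        if score > best_score:
            best_t, best_alpha, best_score = t, alpha, score
    return best_t, best_alpha

print(match_scaled_pattern(response, pattern))   # roughly (40, 2.5)
```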
INDEX TERMS
discontinuities; computer vision; detection; classification; random noise; optimal discontinuity detectors; scaled pattern; statistical method; approximate pattern matching; computer vision; statistics
CITATION
D. Lee, "Coping with Discontinuities in Computer Vision: Their Detection, Classification, and Measurement", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.12, no. 4, pp. 321-344, April 1990, doi:10.1109/34.50620
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.866123616695404, "perplexity": 3083.393858060679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299121.41/warc/CC-MAIN-20150323172139-00223-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://verification.asmedigitalcollection.asme.org/GT/proceedings-abstract/GT2015/56659/V02CT42A016/237118
|
The performance of a compressor is known to be affected by the ingestion of liquid droplets. Heat, mass and momentum transfer as well as the droplet dynamics are some of the important mechanisms that govern the two-phase flow. This paper presents numerical investigations of three-dimensional two-phase flow in a two-stage centrifugal compressor, incorporating the effects of the above mentioned mechanisms. The results of the two-phase flow simulations are compared with the simulation involving only the gaseous phase. The implications for the compressor performance, viz. the pressure ratio, the power input and the efficiency are discussed. The role played by the droplet-wall interactions on the rate of vaporization, and on the compressor performance is also highlighted.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9365805983543396, "perplexity": 448.0740419891523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571086.77/warc/CC-MAIN-20220809185452-20220809215452-00421.warc.gz"}
|
http://mathoverflow.net/questions/64725/satisfiable-polynomial-equations-for-given-free-coefficients
|
## satisfiable polynomial equations for given free coefficients
Let $F$ be a finite field, $n, k, m$ be natural numbers. I give you $m$ vectors $c^{(1)},\ldots,c^{(m)}\in F^n$. I ask for polynomials $p_1,\ldots,p_n$ on $k$ variables over $F$ such that the system of polynomial equations $p_i(t_1,\ldots,t_k)=c^{(j)}_i$ for $i=1,\ldots,n$ is satisfiable for every $1\leq j\leq m$.
Such polynomials can be found with degree $1$ if $k=n$: just take $p_i^{(j)}(t_1,\ldots,t_{k}) = t_i$. Can one find such polynomials when $k=n^{\epsilon}$ for a small $\epsilon>0$ and with degree depending only on $1/\epsilon$?
A simple observation: If $k = 1$ and $m > |F|$, then there are no solutions, because for all $p_1, \ldots, p_n \in F[t]$, $|\lbrace (p_1(t), \ldots, p_n(t)): t \in F \rbrace| \leq |F| < m$. By the same argument, in general there can be no solution if $m > |F|^k$. – auniket May 17 2011 at 14:34
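A throwaway brute-force check of the counting argument in the comment (the script and the helper name rand_poly are made up; nothing here is from the question itself): over $F = \mathbb{F}_p$ with $k = 1$, the image of $t\mapsto(p_1(t),\ldots,p_n(t))$ never has more than $p$ elements, so at most $p$ of the targets $c^{(j)}$ can ever be hit.

```python
import random

# F = F_p, k = 1: the image of t -> (p_1(t), ..., p_n(t)) has at most p points.
p, n = 3, 4
def rand_poly(deg=4):
    coeffs = [random.randrange(p) for _ in range(deg + 1)]
    return lambda t: sum(c * pow(t, i, p) for i, c in enumerate(coeffs)) % p

polys = [rand_poly() for _ in range(n)]
image = {tuple(q(t) for q in polys) for t in range(p)}
print(len(image), "<=", p)   # the image never has more than p elements
```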
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237026572227478, "perplexity": 107.87396895663754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705543116/warc/CC-MAIN-20130516115903-00042-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://archive.numdam.org/item/CM_1986__57_1_81_0/
|
Classification of logarithmic Fano threefolds
Compositio Mathematica, Volume 57 (1986) no. 1, p. 81-125
@article{CM_1986__57_1_81_0,
author = {Maeda, Hironobu},
title = {Classification of logarithmic Fano threefolds},
journal = {Compositio Mathematica},
publisher = {Martinus Nijhoff Publishers},
volume = {57},
number = {1},
year = {1986},
pages = {81-125},
zbl = {0658.14019},
mrnumber = {817298},
language = {en},
url = {http://www.numdam.org/item/CM_1986__57_1_81_0}
}
Maeda, Hironobu. Classification of logarithmic Fano threefolds. Compositio Mathematica, Volume 57 (1986) no. 1, pp. 81-125. http://www.numdam.org/item/CM_1986__57_1_81_0/
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4807853698730469, "perplexity": 5810.044122474666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671363.79/warc/CC-MAIN-20191122143547-20191122172547-00459.warc.gz"}
|
http://www.koreascience.or.kr/search.page?keywords=asymmetric+channel
|
• Title, Summary, Keyword: asymmetric channel
### A 6.4-Gb/s/channel Asymmetric 4-PAM Transceiver for Memory Interface
• Lee, Kwang-Hun;Jang, Young-Chan
• Proceedings of the Korean Institute of Information and Communication Sciences Conference, pp.129-131, 2011
• A 6.4-Gb/s/channel 4-PAM transceiver is designed for a high speed memory application. The asymmetric 4-PAM signaling scheme is proposed to increase the voltage and time margins, and reduces the reference noise effect in a receiver by 33%. To reduce ISI in a channel, 1-tap pre-emphasis of a transmitter is used. The proposed asymmetric 4-PAM transceiver was implemented using a 0.13um 1-poly 6-metal CMOS process with a 1.2V supply. The active area and power consumption of the 1-channel transceiver including a PLL are $0.294um^2$ and 6mW, respectively.
### Channel Doping Concentration Dependent Threshold Voltage Movement of Asymmetric Double Gate MOSFET (비대칭 이중게이트 MOSFET의 도핑농도에 대한 문턱전압이동)
• Jung, Hakkee
• Journal of the Korea Institute of Information and Communication Engineering, v.18 no.9, pp.2183-2188, 2014
• This paper has analyzed threshold voltage movement for channel doping concentration of asymmetric double gate (DG) MOSFET. The asymmetric DGMOSFET is generally fabricated with low doping channel and fully depleted under operation. Since impurity scattering is lessened, asymmetric DGMOSFET has the advantage that high speed operation is possible. The threshold voltage movement, one of short channel effects necessarily occurred in fine devices, is investigated for the change of channel doping concentration in asymmetric DGMOSFET. The analytical potential distribution of series form is derived from Poisson's equation to obtain threshold voltage. The movement of threshold voltage is investigated for channel doping concentration with parameters of channel length, channel thickness, oxide thickness, and doping profiles. As a result, threshold voltage increases with increase of doping concentration, and that decreases with decrease of channel length. Threshold voltage increases with decrease of channel thickness and bottom gate voltage. Lastly threshold voltage increases with decrease of oxide thickness.
### A Partial Response Maximum Likelihood Detection Using Modified Viterbi Decoder for Asymmetric Optical Storage Channels
• Lee, Kyu-Suk;Lee, Joo-Hyun;Lee, Jae-Jin
• The Journal of Korean Institute of Communications and Information Sciences, v.30 no.7C, pp.642-646, 2005
• We propose an improved partial response maximum likelihood (PRML) detector with the branch value compensation of Viterbi decoder for asymmetric high-density optical channel. Since the compensation value calculated by a survival path is applied to each branch metric, it reduces the detection errors by the asymmetric channel. The proposed PRML detection scheme improves the detection performance on the $2^{nd},\;3^{rd}\;and\;4^{th}$ order PR targets for asymmetric optical recording channel.
### Narrow Channel Formation Using Asymmetric Halftone Exposure with Conventional Photolithography
• Cheon, Ki-Cheol;Woo, Ju-Hyun;Jung, Deuk-Soo;Park, Mun-Gi;Kim, Hwan;Lim, Byoung-Ho;Yu, Sang-Jean
• Proceedings of the Korean Information Display Society Conference, pp.258-260, 2008
• The developed halftone exposure technique was successfully applied to the fabrication of narrow transistor channels below $4\;{\mu}m$ with a conventional photolithography method. An asymmetric-slit photomask concept was applied to make channel lengths (L) shorter for high thin film transistor (TFT) performance. These short-channel TFTs showed improved transistor characteristics.
### Analytical Model for the Threshold Voltage of Long-Channel Asymmetric Double-Gate MOSFET based on Potential Linearity (전압분포의 선형특성을 이용한 Long-Channel Asymmetric Double-Gate MOSFET의 문턱전압 모델)
• Yang, Hee-Jung;Kim, Ji-Hyun;Son, Ae-Ri;Kang, Dae-Gwan;Shin, Hyung-Soon
• Journal of the Institute of Electronics Engineers of Korea SD, v.45 no.2, pp.1-6, 2008
• A compact analytical model of the threshold voltage for long-channel Asymmetric Double-Gate(ADG) MOSFET is presented. In contrast to the previous models, channel doping and carrier quantization are taken into account. A more compact model is derived by utilizing the potential distribution linearity characteristic of silicon film at threshold. The accuracy of the model is verified by comparisons with numerical simulations for various silicon film thickness, channel doping concentration and oxide thickness.
### Dependence of Drain Induced Barrier Lowering for Ratio of Channel Length vs. Thickness of Asymmetric Double Gate MOSFET (비대칭 DGMOSFET에서 채널길이와 두께 비에 따른 DIBL 의존성 분석)
• Jung, Hakkee
• Journal of the Korea Institute of Information and Communication Engineering, v.19 no.6, pp.1399-1404, 2015
• This paper analyzed the phenomenon of drain induced barrier lowering (DIBL) for the ratio of channel length vs. thickness of asymmetric double gate (DG) MOSFET. DIBL, an important secondary effect, occurs in short channel MOSFETs in which the drain voltage influences the potential barrier height of the source, and it significantly affects transistor characteristics such as threshold voltage movement. The series potential distribution is derived from Poisson's equation to analyze DIBL, and threshold voltage is defined by the top gate voltage of the asymmetric DGMOSFET in case the off current is $10^{-7}$ A/m. Since the asymmetric DGMOSFET has the advantage that channel length and channel thickness can be significantly reduced, lessening short channel effects, DIBL is investigated for the ratio of channel length vs. thickness in this study. As a result, DIBL is greatly influenced by the ratio of channel length vs. thickness. We also find that DIBL changes greatly with bottom gate voltage, top/bottom gate oxide thickness and channel doping concentration.
### Threshold Voltage Movement for Channel Doping Concentration of Asymmetric Double Gate MOSFET (도핑농도에 따른 비대칭 이중게이트 MOSFET의 문턱전압이동현상)
• Jung, Hakkee;Lee, jongin;Jeong, Dongsoo
• Proceedings of the Korean Institute of Information and Communication Sciences Conference, pp.748-751, 2014
• This paper has analyzed threshold voltage movement for channel doping concentration of asymmetric double gate (DG) MOSFET. The asymmetric DGMOSFET is generally fabricated with low doping channel and fully depleted under operation. Since impurity scattering is lessened, asymmetric DGMOSFET has the advantage that high speed operation is possible. The threshold voltage movement, one of short channel effects necessarily occurred in fine devices, is investigated for the change of channel doping concentration in asymmetric DGMOSFET. The analytical potential distribution of series form is derived from Poisson's equation to obtain threshold voltage. The movement of threshold voltage is investigated for channel doping concentration with parameters of channel length, channel thickness, oxide thickness, and doping profiles. As a result, threshold voltage increases with increase of doping concentration, and that decreases with decrease of channel length. Threshold voltage increases with decrease of channel thickness and bottom gate voltage. Lastly threshold voltage increases with decrease of oxide thickness.
### Influence of Tunneling Current on Threshold voltage Shift by Channel Length for Asymmetric Double Gate MOSFET (비대칭 DGMOSFET에서 터널링 전류가 채널길이에 따른 문턱전압이동에 미치는 영향)
• Jung, Hakkee
• Journal of the Korea Institute of Information and Communication Engineering, v.20 no.7, pp.1311-1316, 2016
• This paper analyzes the influence of tunneling current on threshold voltage shift by channel length of short channel asymmetric double gate (DG) MOSFET. Tunneling current increases significantly as the channel length decreases below 10 nm, and secondary effects such as threshold voltage shift occur. The threshold voltage shift due to tunneling current is not negligible even for the asymmetric DGMOSFET, which is developed to reduce short channel effects. The off current consists of thermionic and tunneling current, and the ratio of tunneling current increases as the channel length is reduced. The WKB (Wentzel-Kramers-Brillouin) approximation is used to obtain the tunneling current, and the potential distribution in the channel is derived analytically. As a result, the threshold voltage shift due to tunneling current becomes significant as the channel length decreases in the short channel asymmetric DGMOSFET. The threshold voltage changes with the bottom gate voltage, but the threshold voltage shift is nearly constant.
### Device Optimization of N-Channel MOSFETs with Lateral Asymmetric Channel Doping Profiles
• Baek, Ki-Ju;Kim, Jun-Kyu;Kim, Yeong-Seuk;Na, Kee-Yeol
• Transactions on Electrical and Electronic Materials, v.11 no.1, pp.15-19, 2010
• In this paper, we discuss design considerations for an n-channel metal-oxide-semiconductor field-effect transistor (MOSFET) with a lateral asymmetric channel (LAC) doping profile. We employed a $0.35\;{\mu}m$ standard complementary MOSFET process for fabrication of the devices. The gates to the LAC doping overlap lengths were 0.5, 1.0, and $1.5\;{\mu}m$. The drain current ($I_{ON}$), transconductance ($g_m$), substrate current ($i_{SUB}$), drain to source leakage current ($i_{OFF}$), and channel-hot-electron (CHE) reliability characteristics were taken into account for optimum device design. The LAC devices with shorter overlap lengths demonstrated improved $I_{ON}$ and $g_m$ characteristics. On the other hand, the LAC devices with longer overlap lengths demonstrated improved CHE degradation and $I_{OFF}$ characteristics.
### Tunneling Current of Sub-10 nm Asymmetric Double Gate MOSFET for Channel Doping Concentration (10 nm 이하 비대칭 DGMOSFET의 채널도핑농도에 따른 터널링 전류)
• Jung, Hakkee
• Journal of the Korea Institute of Information and Communication Engineering, v.19 no.7, pp.1617-1622, 2015
• This paper analyzes the ratio of tunneling current for channel doping concentration of sub-10 nm asymmetric double gate (DG) MOSFET. The ratio of tunneling current to off current in the subthreshold region increases for channel lengths below 10 nm. Even though the asymmetric DGMOSFET is developed to reduce short channel effects, the increase of tunneling current below 10 nm is inevitable. By calculating the ratio of tunneling current in the off current according to channel doping concentration, this study investigates the influence of the tunneling current that occurs in short channels. The off current consists of thermionic emission and tunneling current; the analytical potential distribution is obtained using the Poisson equation and the tunneling current using the WKB (Wentzel-Kramers-Brillouin) approximation. As a result, tunneling current changes greatly with channel doping concentration in the sub-10 nm asymmetric DGMOSFET, especially with parameters of channel length, channel thickness, and top/bottom gate oxide thickness and voltage.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8305723071098328, "perplexity": 8477.563136449453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00302.warc.gz"}
|
http://mathoverflow.net/questions/6974/historical-question-cauchy-crofton-theorem-vs-radon-transform?answertab=oldest
|
# Historical question Cauchy-Crofton theorem vs. Radon transform
The Radon transform apparently was discovered around 1917 if Wikipedia is to be believed. The Cauchy-Crofton theorem is a much older theorem (mid 19th-century). But both ideas are more or less the same.
Did Radon consider his transform as a generalization of the Cauchy-Crofton theorem? Did he not know about the Cauchy-Crofton theorem?
http://en.wikipedia.org/wiki/Crofton_formula
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9659988284111023, "perplexity": 870.1131106742804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098849.37/warc/CC-MAIN-20150627031818-00200-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/one-more-quantum-matrix-question.46672/
|
# One more Quantum Matrix question
1. Oct 8, 2004
### Ed Quanta
Let A be a Hermitian nxn matrix. Let the column vectors of the nxn matrix S be comprised of the orthonormalized eigenvectors of A
Again, Sinv is the inverse of S
a) Show that S is unitary
b) Show that Sinv(A)S is a diagonal matrix comprised of the eigenvalues of A
No idea how to start this one off.
2. Oct 8, 2004
### Wong
a) U is a unitary matrix <=> U*U = I, where "*" denotes conjugate transpose <=> $$\sum_{j} u_{ji}^{*}u_{jk} = \delta_{ik}$$ <=> $$u_{i}^{*}u_{k}=\delta_{ik}$$, where $$u_{i}$$ is the ith column of U. The last relation implies orthogonality of columns of U.
b)This one needs a little thought. If u is an eigenvector of A, then $$Au=\lambda u$$. Then what is AS? Remember that each column of S is just an eigenvector of A. Also note that Sinv*S=I.
Last edited: Oct 8, 2004
3. Oct 9, 2004
### Ed Quanta
Sorry, I am still not sure how to find AS without knowing the eigenvectors of A.
4. Oct 9, 2004
### Wong
First try to think about what you want to prove. That is, $$S^{-1}AS=D$$, where D is a diagonal matrix. This is equivalent to proving AS=DS, where D is diagonal. Now each column of S is an eigenvector of A. So A acting on S should produce something quite simple. (Try to think of what is the defining eigenvalue equation for A.) May you put the result in the form DS, where D is a diagonal matrix?
5. Mar 6, 2005
### erraticimpulse
Wong Wrong
I have doubts that either of you guys will read this anytime soon. I had this same problem and the conclusion that Wong tried to provide is incorrect. Instead of $$AS=DS$$ it's actually $$AS=SD$$. The product DS will produce the correct entries along the diagonal but false elsewhere (really think about what you're doing here). But if you use the product SD it will provide the correct eigenvalue for every eigenvector.
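A quick numerical check of both parts of the problem and of the corrected relation AS = SD (a sketch assuming NumPy; the random Hermitian matrix below is just for illustration, not part of the original exercise):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                     # a random Hermitian n x n matrix

eigvals, S = np.linalg.eigh(A)               # columns of S: orthonormal eigenvectors

print(np.allclose(S.conj().T @ S, np.eye(n)))                     # (a) S is unitary
print(np.allclose(np.linalg.inv(S) @ A @ S, np.diag(eigvals)))    # (b) Sinv A S is diagonal
print(np.allclose(A @ S, S @ np.diag(eigvals)))                   # the AS = SD relation from post #5
```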
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9880890250205994, "perplexity": 774.0007984966481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170425.26/warc/CC-MAIN-20170219104610-00263-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/925518/proof-of-taylors-theorem-with-wirtinger-derivatives-complex-coordinates
|
# Proof of Taylor's Theorem with Wirtinger Derivatives (Complex coordinates)
Suppose that $f$, defined in $D_1(0)$, is infinitely differentiable. Show that for each $n \in \mathbb{N}$ we have \begin{equation*} f(z,\bar{z}) = \sum\limits_{0 \leq j + k \leq n} \frac{\partial_z^j\partial_{\bar{z}}^kf(0,0) }{j!k!}z^j\bar{z}^k + \mathcal{O}(|z|^{n+1}). \end{equation*}
I've tried to expand Taylor's theorem for reals to get this result, but everything I've tried has worked out badly. I'm sure there's an elegant way to do this that I'm just not seeing. This IS a homework problem, so feel free to give partial solutions/hints if you prefer.
Edit: My starting point was that we know: \begin{equation*} f(x,y) = \sum\limits_{0 \leq j + k \leq n} \frac{\partial_x^j\partial_{y}^kf(0,0) }{j!k!}x^jy^k + \mathcal{O}(\sqrt{x^2 + y^2}^{n+1}). \end{equation*} from Taylor's theorem for two variables. It's likely possible to get one from the other from the extremely ugly, brute-force method of substituting in $z = x + iy,~\bar{z} = x - iy$ and $\partial_z = \frac{1}{2}(\partial_x - i\partial_y),~\partial_{\bar{z}} = \frac{1}{2}(\partial_x + i\partial_y)$. My gut feeling tells me that there must be a better way to solve this problem then that. I just can't figure it out. Any help would be extremely appreciated.
Starting from the Taylor formula for functions of a real variable,
$$g(x) = \sum_{k=0}^n \frac{g^{(k)}(0)}{k!}x^k + \frac{1}{n!}\int_0^x (x-t)^n \cdot g^{(n+1)}(t)\,dt,$$
we can obtain the result by considering $g_\varphi(r) = f(re^{i\varphi},re^{-i\varphi})$ and expressing the derivatives of $g_\varphi$ in terms of the Wirtinger derivatives of $f$.
Inductively, we have
\begin{align} g_\varphi^{(k+1)}(t) &= \frac{\partial}{\partial t} g_\varphi^{(k)}(t)\\ &= \frac{\partial}{\partial t} \sum_{m=0}^k \binom{k}{m} \partial_z^m\partial_{\overline{z}}^{k-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k-m)\varphi}\\ &= \sum_{m=0}^k \binom{k}{m} \partial_z^{m+1}\partial_{\overline{z}}^{k-m}f(te^{i\varphi},te^{-i\varphi})e^{i(m+1)\varphi}e^{-i(k-m)\varphi}\\ &\quad + \sum_{m=0}^k \binom{k}{m} \partial_z^m \partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ &= \sum_{m=0}^{k+1} \binom{k}{m-1} \partial_z^m\partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ &\quad + \sum_{m=0}^{k+1} \binom{k}{m} \partial_z^m\partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ &= \sum_{m=0}^{k+1}\binom{k+1}{m} \partial_z^m\partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ \end{align}
by the chain rule just like for the real partial derivatives $\partial_x,\,\partial_y$, and so for $z = \lvert z\rvert e^{i\varphi}$ we obtain
\begin{align} f(z,\overline{z}) &= g_\varphi(\lvert z\rvert)\\ &= \sum_{k=0}^n \frac{g_\varphi^{(k)}(0)}{k!}\lvert z\rvert^k + \underbrace{\frac{1}{n!}\int_0^{\lvert z\rvert} (\lvert z\rvert-t)^n g_\varphi^{(n+1)}(t)\,dt}_{R_n(z,\overline{z})}\\ &= \sum_{j+m\leqslant n} \frac{\partial_z^j\partial_{\overline{z}}^m f(0,0)}{j!m!} e^{ij\varphi}e^{-im\varphi}\lvert z\rvert^{j+m} + R_n(z,\overline{z})\\ &= \sum_{j+m\leqslant n} \frac{\partial_z^j\partial_{\overline{z}}^m f(0,0)}{j!m!}z^j\overline{z}^m + R_n(z,\overline{z}), \end{align}
with
\begin{align} \lvert R_n(z,\overline{z})\rvert &= \frac{1}{n!} \left\lvert \int_0^{\lvert z\rvert} (\lvert z\rvert-t)^n g_\varphi^{(n+1)}(t)\,dt\right\rvert\\ &\leqslant \frac{1}{n!}\sum_{j=0}^{n+1}\binom{n+1}{j}\int_0^{\lvert z\rvert} (\lvert z\rvert-t)^n \left\lvert \partial_z^j\partial_{\overline{z}}^{n+1-j}f(te^{i\varphi},te^{-i\varphi})\right\rvert\,dt\\ &\leqslant \left(\sum_{j=0}^{n+1} \frac{\lVert \partial_z^j\partial_{\overline{z}}^{n+1-j} f\rVert_{R}}{j!(n+1-j)!}\right)\lvert z\rvert^{n+1} \end{align}
where $R$ is arbitrary between $\lvert z\rvert$ and $1$, and $\lVert h\rVert_R = \sup \{ \lvert h(z,\overline{z})\rvert : \lvert z\rvert \leqslant R\}$.
Note: we cannot have a bound $C\cdot \lvert z\rvert^{n+1}$ for the remainder term uniformly on all of $D_1(0)$, since $f$ could be unbounded on the disk, but a polynomial always is bounded on bounded subsets of $\mathbb{C}$. We can only expect to have for every compact $K\subset D_1(0)$ a constant $C_K$ such that $\lvert R_n(z,\overline{z})\rvert \leqslant C_K\cdot \lvert z\rvert^{n+1}$ holds for all $z\in K$. The expression with the $\lVert\cdot\rVert_R$ gives exactly that.
You may have noticed that the proof is exactly like the/a standard proof of the Taylor formula for a function of several (in this case two) real variables. The point is the formula for the higher derivatives of $g_\varphi$, which matches exactly the formula for the derivatives expressed in terms of the real partial derivatives. That they behave just like true partial derivatives in many ways (chain rule, product rule, ...) makes the Wirtinger derivatives useful.
So, you know there is a polynomial $P$ of degree $\le n$ such that $$f(x,y) = P(x,y) + \mathcal{O}((x^2 + y^2)^{(n+1)/2}) \tag{1}$$ Note that the error term has all derivatives of orders $\le n$ vanishing at the origin.
Plug $x=(z+\bar z)/2$ and $y=(z-\bar z)/(2i)$ in (1). Treating $z$ and $\bar z$ as abstract variables for the moment, observe that this is a linear invertible change of variables: a polynomial becomes another polynomial $Q$ of same degree. So, $$f(z) = Q(z,\bar z) + \mathcal{O}(|z|^{n+1}) \tag{2}$$ As before, the error term has all derivatives of orders $\le n$ vanishing at the origin. Assuming as known that $$\frac{\partial }{\partial z}(z^m \bar z^n)=mz^{m-1} \bar z^n,\qquad \frac{\partial }{\partial \bar z}(z^m \bar z^n)=nz^{m} \bar z^{n-1} \tag{3}$$ we find that the coefficients of $Q$ are what is claimed by taking derivatives on both sides and evaluating them at $0$.
One way to prove (3) is to
• check that Wirtinger derivatives satisfy the product rule (easy, since they are just the sum of two things that satisfy it)
• check that $\frac{\partial }{\partial z} z =1$, $\frac{\partial }{\partial z} \bar z =0$, $\frac{\partial }{\partial \bar z} z =0$, $\frac{\partial }{\partial \bar z} \bar z =1$. (Something that should be done to motivate said derivatives, anyway.)
• Thanks for the comment, but I'm having a bit of trouble understanding what you mean by "we find that the coefficients of $Q$ are what is claimed by taking derivatives on both sides and evaluating them at $0$." What exactly do you mean by that? – user165388 Sep 12 '14 at 20:03
• Say, you differentiated both sides of (2) twice in $z$ and three times in $\bar z$. Then on the right you have a polynomial where every monomial lost two factors of $z$ and three factors of $\bar z$ (and gained some coefficient). Now when we plug $0$ in there, the only monomial that survives is the one that came from $z^2\bar z^3$. This gives a relation between the derivatives of $f$ and coefficients of $Q$. – user147263 Sep 12 '14 at 21:41
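To make that coefficient-matching concrete, here is a quick sympy check (my own sketch, not part of the thread; the sample function and the helper names dz, dzb, predicted, coeff are arbitrary). It Taylor-expands a smooth f(x, y) to total degree 2, substitutes x = (z + z̄)/2 and y = (z − z̄)/(2i) with z, z̄ treated as independent symbols, and compares the coefficient of z^j z̄^m with ∂_z^j ∂_z̄^m f(0,0)/(j! m!).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z, zb = sp.symbols('z zbar')   # treated as independent abstract variables, as in the answer

# Any smooth, non-holomorphic sample function will do here.
f = sp.exp(x) * sp.cos(y) + x**2 * y

# Wirtinger derivatives: d/dz = (d/dx - i d/dy)/2,  d/dzbar = (d/dx + i d/dy)/2.
def dz(g):  return (sp.diff(g, x) - sp.I * sp.diff(g, y)) / 2
def dzb(g): return (sp.diff(g, x) + sp.I * sp.diff(g, y)) / 2

def predicted(j, m):
    """Claimed coefficient of z^j zbar^m: (d_z^j d_zbar^m f)(0,0) / (j! m!)."""
    g = f
    for _ in range(j): g = dz(g)
    for _ in range(m): g = dzb(g)
    return sp.simplify(g.subs({x: 0, y: 0}) / (sp.factorial(j) * sp.factorial(m)))

# Taylor-expand f(x, y), substitute x = (z + zbar)/2, y = (z - zbar)/(2i), expand.
taylor = sp.series(sp.series(f, x, 0, 3).removeO(), y, 0, 3).removeO()
Q = sp.expand(taylor.subs({x: (z + zb) / 2, y: (z - zb) / (2 * sp.I)}))

def coeff(j, m):
    """Coefficient of z^j zbar^m in Q, read off by differentiating and evaluating at 0."""
    g = Q
    for _ in range(j): g = sp.diff(g, z)
    for _ in range(m): g = sp.diff(g, zb)
    return g.subs({z: 0, zb: 0}) / (sp.factorial(j) * sp.factorial(m))

for j in range(3):
    for m in range(3 - j):
        assert sp.simplify(coeff(j, m) - predicted(j, m)) == 0, (j, m)
print("all coefficients with j + m <= 2 match")
```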
https://www.physicsforums.com/threads/normal-force-at-the-bottom-of-a-ferris-wheel.216271/
# Homework Help: Normal Force at the bottom of a Ferris Wheel
1. Feb 18, 2008
### AnkhUNC
1. The problem statement, all variables and given/known data
A student of weight 678 N rides a steadily rotating Ferris wheel (the student sits upright). At the highest point, the magnitude of the normal force N on the student from the seat is 565 N. (a) What is the magnitude of N at the lowest point? If the wheel's speed is doubled, what is the magnitude FN at the (b) highest and (c) lowest point?
2. Relevant equations
3. The attempt at a solution
So M = 678N, NTop = 565N. Fc = mg - Ntop = 6079.4
So Nbottom = Nbottom - mg = 6079.4 which leads Nbottom to = 12723.8 but this is incorrect. Where am I going wrong?
2. Feb 18, 2008
### Staff: Mentor
At the top, the weight and the acceleration (toward the center) point down, but the normal force from the seat points up:
mg - N = mv^2/r; so N = mg - mv^2/r
At the bottom, normal force and acceleration point up, but weight points down:
N - mg = mv^2/r; so N = mv^2/r + mg
3. Feb 18, 2008
### AnkhUNC
I really don't need all that though, do I? If I do, how am I going to solve for v^2 or r? I only have one equation and two unknowns. At best I'd have Ntop+Nbottom = mv^2/r.
4. Feb 18, 2008
### Staff: Mentor
Yep. It's the easy way!
No need to solve for those.
Examining those expressions for N, how does Nbottom compare to Ntop? (Hint: What's Nbottom + Ntop?)
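For completeness, here is a short numerical sketch of where that hint leads (my addition, not part of the original thread), using N_top = mg - mv^2/r and N_bottom = mg + mv^2/r:

```python
# With the seat pushing up at both the top and the bottom:
#   N_top    = mg - m v^2 / r
#   N_bottom = mg + m v^2 / r
# so N_top + N_bottom = 2 mg, and doubling the speed quadruples m v^2 / r.
W = 678.0              # weight mg, in newtons
N_top = 565.0          # normal force at the highest point, in newtons

cent = W - N_top       # m v^2 / r = 113 N at the original speed
print(W + cent)        # (a) N at the lowest point: 791.0 N

cent_doubled_v = 4 * cent          # doubled speed -> quadrupled m v^2 / r
print(W - cent_doubled_v)          # (b) N at the highest point: 226.0 N
print(W + cent_doubled_v)          # (c) N at the lowest point: 1130.0 N
```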
https://physics.aps.org/synopsis-for/10.1103/PhysRevB.80.224517
# Synopsis: Dirac connection
Ballistic electron transport through a clean superconductor with d-wave symmetry has features in common with graphene.
In response to a voltage, the electrical current in a pure sheet of graphene diminishes as $1/L$, where $L$ is the length over which the current is transmitted. This form of scaling, called pseudodiffusive because of its similarity to diffusion in a random potential, occurs when $L$ is less than the width of the sheet and the mean free path.
In graphene, pseudodiffusion occurs because the electrons behave like massless Dirac fermions. Now, in a paper appearing in Physical Review B, János Asbóth and collaborators at Leiden University in the Netherlands calculate the transmission of electrons and holes between two normal-metal electrodes, separated over a distance $L$ by a clean d-wave superconductor. Asbóth et al. find that the transmitted electrical and thermal currents both have the pseudodiffusive $1/L$ scaling characteristic of massless Dirac fermions—regardless of the presence of tunnel barriers at the metal-superconductor interfaces—as long as $L$ is larger than the superconducting coherence length and smaller than the width of the superconductor and the mean free path. This occurs because the d-wave superconductor forms ballistic conduction channels for coupled electron-hole excitations that are described by an anisotropic two-dimensional Dirac equation analogous to that of graphene. This finding is likely to spur experimental efforts to search for pseudodiffusive transmission in clean single crystals of high-$T_c$ cuprates. – Sarma Kancharla
https://johncarlosbaez.wordpress.com/2016/02/25/gamma-ray-burst/
## Hard X-Ray Burst
I just learned something cool: 0.4 seconds after LIGO saw those gravitational waves on 14 September 2015, a satellite named Fermi detected a burst of X-rays!
• V. Connaughton et al, Fermi GBM observations of LIGO gravitational wave event GW150914.
It lasted one second. It was rather weak (for such things). The photons emitted ranged from 50 keV to 10 MeV in energy, with a peak around 3.5 MeV. The paper calls this event a ‘hard X-ray source’. Wikipedia says photons with an energy over 100 keV deserve the name gamma rays, while those between 10 keV and 100 keV are ‘hard X-rays’. So maybe this event deserves to be called a gamma-ray burst. I suppose it’s all just a matter of semantics: it’s not as if there’s any sharp difference between a highly energetic X-ray and a low-energy gamma ray.
Whatever you call it, this event does not appear connected with other previously known objects. It’s hard to tell exactly where it happened. But its location is consistent with what little we know about the source of the gravitational waves.
If this X-ray burst was caused by the same event that created the gravitational waves, that would be surprising. Everyone assumed the gravitational waves were formed by two large black holes that had been orbiting each other for millions or billions of years, slowly spiraling down. In this scenario we don’t expect much electromagnetic radiation when the black holes finally collide.
Perhaps those expectations are wrong. Or maybe—just maybe—both the gravitational waves and X-rays were formed during the collapse of a single very large star! That’s what typically causes gamma ray bursts—we think. But it’s not at all typical—as far as we know—for a large star to form two black holes when it collapses! And that’s what we’d need to get that gravitational wave event: two black holes, which then spiral down and merge into one!
Here’s an analysis of the issue:
As he notes, the collapsing star would need to have an insane amount of angular momentum to collapse into a dumb-bell shape and form two black holes, each roughly 30 times the mass of our Sun, which then quickly spiral down and collide.
Furthermore, as Tony Wells pointed out to me, the lack of neutrinos argues against the idea that this event involved a large collapsing star:
• ANTARES collaboration, High-energy neutrino follow-up search of Gravitational wave event GW150914 with ANTARES and IceCube.
To add to the muddle, another satellite devoted to observing gamma rays, called INTEGRAL, did not see anything:
It will take a while to sort this out.
But luckily, the first gravitational wave burst seen by LIGO was not the only one! Dennis Overbye of the New York Times writes:
Shortly after the September event, LIGO recorded another, weaker signal that was probably also from black holes, the team said. According to Dr. Weiss, there were at least four detections during the first LIGO observing run, which ended in January. The second run will begin this summer. In the fall, another detector, Advanced Virgo, operated by the European Gravitational Observatory in Italy, will start up. There are hopes for more in the future, in India and Japan.
So we will know more soon!
For more on Fermi:
### 29 Responses to Hard X-Ray Burst
1. jessemckeown says:
Well, if there were two black holes orbiting each other, there might have been three or more stars there before; and with ~3M☉ converted to radiant geometric disturbance, any neighbours may well have felt something of a nudge.
2. WebHubTelescope says:
As a way of trying to understand this, there seem to be two theories proposed.
1. Two black holes that have been in mutual orbit for millions or billions of years, are caught in the moment that they finally collapse into each other.
2. A massive star collapses which then spontaneously creates two black holes and these are not far enough apart so they quickly collapse into each other.
And they think the latter is a preferable theory because gamma rays need a source of mass to emit, and any mass in theory #1 is well in the past.
• John Baez says:
Yes, that’s right.
And here’s the dilemma: theory 1 is implausible because in this scenario we don’t see how a gamma ray burst would occur, and theory 2 is implausible because a star would need to be rotating at an insane rate, stretched out into a kind of dumbbell, to collapse into two separate black holes rather than one.
However, theorists are inventive, and we’re just beginning to see people start trying to explain this event. Jesse’s idea is nice: take theory 1 and put another star nearby, which gets pulverized by gravitational radiation and emits some gamma rays. One needs to get quantitative to see how plausible this might be.
3. domenico says:
I am thinking that a couple of positive (or negative) charged black holes could generate electromagnetic radiation in the relativistic merger, but there is the problem of the 0.4 sec arrival delay of the electromagnetic radiation.
I am thinking that if the emission mechanism is similar (a jet of gravitational wave and a jet of electromagnetic wave with the same directions), then there is only a zone of the space (along the line between the source and the Earth) where the signal is observable.
• Nix says:
The problem there is that charging a black hole like that (and having it retain its charge for any notable amount of time) is probably very hard. Most stuff in space is net-uncharged, after all, and if a hole gets charged it will preferentially attract oppositely-charged particles until it is uncharged again.
• John Baez says:
Yes, I see no mechanism for two black holes to become highly charged. And if they were charged and orbiting each other 250 times per second (as the ones that collided were, at the end), they would produce radiation with a frequency of 500 hertz. Why would they produce gamma rays? Okay, I guess in the merger there would be a kind of ‘spark’: that would be interesting to analyze. But a much more promising method of getting gamma rays is to get some ordinary matter into the act.
• domenico says:
I am thinking that there are simple mechanisms to charge a black hole; for example, some galaxies with different total charge, so that exist zone of space where the charge is positive (or negative) over long distance, or charged stars before the black hole birth because of solar winds, or …
I try a simple solution of the problem: if the gamma-ray burst are localized in jets, how it is possible to observe the arrival of the gamma-ray burst, and gravitational waves, if the two processes have not similar mechanism of production, and similar waves?
If there is not jets, then the processes can be different; but I don’t know if GW150914-GBM is a jet emission.
4. arch1 says:
Can anyone explain the logarithmic term in the formula for the false alarm probability in section 2.2 of Connaughton et al? They say that this term “accounts for the search window trials”. Thanks!
• John Baez says:
I’m not a statistician, so let me just quote the passage you mention and see if someone understands it. Or maybe if I think about it a while it’ll make sense:
We determine the significance of a GBM [gamma-ray burst monitor] counterpart candidate by considering both its frequency of occurrence, and its proximity to the GW [gravitational wave] trigger time. The candidates are assigned a false-alarm probability of $2 \lambda \Delta t$ where $\lambda$ is the candidate’s false-alarm rate in the GBM data, and $\Delta t$ is its absolute time-difference to the GW time. Our method, described in Blackburn (2015) allows us to account for all the search windows in the interval over which we performed our search, while assigning larger significance to those events found closest to the time of interest. This two-parameter ranking method frees us from having to choose a fixed search interval. We can also limit the length of the search interval to a value that is computationally reasonable without fear of truncating our probability distribution.
With a false alarm rate of $4.79 \times 10^{-4}$ Hz for GW150914-GBM, which begins 0.4 s after the time of the GW event, we calculate a false alarm probability for GW150914-GBM, $P = 9.58 \times 10^{-4}$ Hz $\times 0.4 s \times (1 + \ln(30 s / 0.256 s)) = 0.0022,$ where the logarithmic term accounts for the search window trials.
“Hz” means “hertz” which means “per second”. I’m suspicious of anyone who gives three decimals of precision for a false alarm rate: it’s generally hard to know exactly how often you screw up. But that’s not very important.
A more important thing is to understand the role of their “search windows”. Elsewhere these are called “bins”, and they’re said to be .256 seconds long. I guess the idea is that in each such time interval they count the number of photons detected—see Figure 2 in the paper. I don’t see where the figure of 30 seconds is coming from.
I think it would help to read the paper by Blackburn:
• Blackburn et al, Significance of two-parameter coincidence.
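For what it's worth, the quoted arithmetic is easy to reproduce (my own check, not taken from the paper; the 30 s search interval and 0.256 s bins are simply the values quoted above):

```python
# Reproducing the quoted number: P = 2 * lambda * dt * (1 + ln(T / delta)).
import math

lam = 4.79e-4      # GBM false-alarm rate, Hz
dt = 0.4           # offset from the GW time, seconds
T = 30.0           # length of the search interval, seconds (as quoted)
delta = 0.256      # bin length, seconds (as quoted)

P = 2 * lam * dt * (1 + math.log(T / delta))
print(P)           # ~0.0022, matching the value in the excerpt
```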
5. amarashiki says:
Extreme Astronomy, Astrophysics and Cosmology: gamma rays, X-rays, neutrino astronomy, gravitational wave astronomy, cosmic rays, and much more are pushing ahead our high energy limits…and phenomenology…In time, theory and new theories will follow them…
6. davetweed says:
I wonder if Greg Egan could sue the universe for copyright infringement of a scene from Diaspora? (OK, they were neutron stars with lethal levels of radiation emitted, but plagiarism is bad, mkay?)
• arch1 says:
(chuckle) I think he’d have trouble establishing priority.
7. arch1 says:
Another beginner Q: The calculated probability of false alarm seems to be based on an assumed background rate which is the upper end of a 90% confidence interval.
How can they conclude a ~0.22% probability of false alarm, if there is a 5% chance that their statistically inferred background rate is too small? Shouldn’t they decrease the 5% (which would cause the 0.22% to increase) until the two are at least in the same ballpark?
• John Baez says:
Good question. Again, it takes someone with more understanding of statistics than I have to give a good answer. But I think it’s not necessarily a paradox that the 5% greatly exceeds the ~0.22%. Say there’s a 95% chance that an event occurs once in a millennium (on average) and a 5% chance that it occurs once in a century (on average). You can work out the chance that it occurs on a given day, and it’s a lot less than 5%.
The problem is that there could be a ‘long tail’. E.g. suppose there’s a 95% chance that an event occurs once in a millennium (on average) and a 4.9% chance that it occurs once in a century (on average) and a 0.1% chance that it occurs once an hour. This changes everything.
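Putting rough numbers on that example (my own illustration, assuming the event follows a Poisson process and converting the quoted rates to events per day):

```python
import math

# Probability of seeing at least one event on a given day when the rate itself
# is uncertain: P = sum_k w_k * (1 - exp(-r_k * t)), with t = 1 day and the
# rates r_k in events per day.
def p_at_least_one(weights_and_rates, t=1.0):
    return sum(w * (1 - math.exp(-r * t)) for w, r in weights_and_rates)

# 95% "once per millennium", 5% "once per century":
print(p_at_least_one([(0.95, 1 / 365_250), (0.05, 1 / 36_525)]))   # ~4e-06, far below 5%

# Long tail: move just 0.1% of the weight to "once per hour" (24 per day):
print(p_at_least_one([(0.95, 1 / 365_250), (0.049, 1 / 36_525), (0.001, 24.0)]))  # ~1e-03
```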
• arch1 says:
Thanks to your example, John, I see that my thinking was incorrect – the two probabilities involved need not be even roughly comparable.
I now think the “right” way to compute the false alarm rate, given that it depends on an assumed background rate b which is itself uncertain, would be as the integral of
pr{false alarm | b=r} * p(r) dr,
where p is b’s assumed probability density function.
I guess that the people writing and reading such papers have a good sense of when such considerations might make a difference (due e.g. to long tails or sensitive dependencies), and just ignore them otherwise. (At least, I hope they don’t ignore them just because they are inconvenient and messy!)
8. Crippa75 says:
Do they know what the effect would be if a extremely strong gravity wave would create distortion in dark matter or dark energy?
Even if all normal matter is gone there should still be some dark matter clusted nearby.
• John says:
Dark matter, being uncharged, couldn’t be the origin of this flash. I don’t think we really know enough about dark energy to say for sure, but a simple cosmological constant is a pure geometric term and shouldn’t produce any EM flash either.
You might also think about normal matter like a nearby star being disrupted by the GWs, but for this event the GWs would not disrupt a material body even very close to the black holes. The source was ~400 Mpc ~= 1e22 kilometres away, and the peak strain at Earth was 1e-21. The strain falls off like 1/r, so a totally naive estimate would say you need to be only 1000 km from the “source” for the strain to approach 1%, which is well inside the near field where the waves are not linear anyway. I doubt there was any material, dark or not, left hanging about in that area after it was swept clean by a couple of huge black holes zooming about at a large fraction of light speed.
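The 1000 km figure is just the 1/r scaling run backwards; here is a two-line version of the same back-of-envelope estimate (my restatement, no new information):

```python
# Scale the strain measured at Earth back toward the source, assuming h ~ 1/r.
h_earth = 1e-21      # peak strain measured at Earth
d_source_km = 1e22   # ~400 Mpc expressed in kilometres
h_target = 1e-2      # "strain approaching 1%"

r_km = h_earth * d_source_km / h_target
print(r_km)          # 1e3 km: about 1000 km from the source, well inside the near zone
```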
9. Māris Ozols says:
Maybe it was a ring-shaped black hole rather than a collapsing star:
http://www.cam.ac.uk/research/news/five-dimensional-black-hole-could-break-general-relativity
However, that would require 5 dimensions… :)
• John Baez says:
Heh, yes — those ring-shaped 5d black holes have been in the news lately.
By the way, it’s not really known if ring-shaped event horizons are possible in our universe. At one point it was thought that as two black holes collided, their event horizons might momentarily be connected by two bridges, forming an event horizon with the topology of a torus. But so far, detailed simulations have instead given results like this:
So, the idea of a ring-shaped horizon is no longer so popular… but nobody has proved it can never happen.
For details, see:
• Michael I. Cohen and Jeffrey D. Kaplan and Mark A. Scheel, On toroidal horizons in binary black hole inspirals, Phys. Rev. D 85 (2012), 024031.
• arch1 says:
Is there some reason a black hole can’t merge with two others at once?
If not, it hand-wavingly seems one could almost force a torus given enough black holes and enough ingenuity (two carefully oriented and timed colliding necklaces, or something).
• John Baez says:
arch1 wrote:
Is there some reason a black hole can’t merge with two others at once?
No, it’s just incredibly unlikely here in the actual Universe.
If not, it hand-wavingly seems one could almost force a torus given enough black holes and enough ingenuity (two carefully oriented and timed colliding necklaces, or something).
Yes, a ring of small black holes all moving towards each other seem like they could form a torus as they meet. I doubt anyone has tried simulating that yet.
10. […] Baez wrote a bit on it https://johncarlosbaez.wordpress.com/2016/02/25/gamma-ray-burst/ Greg Bernhardt, Feb 26, 2016 at 9:34 […]
11. Jonathan Scott says:
The reference to the energy being “about” 50keV seems wrong; the Fermi team paper says “above” 50keV and the detailed energy spectrum results (section 2.5) show a spread of energies starting above 50keV which peak around 3.5MeV and which appear to continue up to at least 10MeV and possibly even up to 50MeV.
• John Baez says:
Thanks! That’s interesting.
I was going to correct the title of this blog article again, restoring its original title “Gamma ray burst”, but the beginning of this section calls this event “a weak but significant hard X-ray source with a spectrum that extends into the MeV range”—so they seem to think “hard X-ray source” is a suitable term even though Wikipedia calls photons with energy more than 100 keV “gamma rays”. They also say:
For a deconvolution assuming a source position at the northeastern tip of the southern lobe (entry 10 in Table 2), the Comptonized model converges to find a best fit $E_{peak}$ of $3.5^{+2.3}_{-1.1}$ MeV.
where earlier they’d defined $E_{peak}$ to be the “peak energy in the spectral energy distribution”. And they say:
The fit parameter values are typical for short GRBs [gamma-ray bursts].
12. […] Bernhardt said: John Baez wrote a bit on it at https://johncarlosbaez.wordpress.com/2016/02/25/gamma-ray-burst/
Perhaps those expectations are wrong. Or maybe—just maybe […]
13. Peter Bloem says:
Wouldn’t proponents of the single-star-collapsing-hypothesis also need to account for the 0.4 second gap? I know the speed of gravitational waves hasn’t been fully established, but it’s unlikely to be faster than x-rays, right?
So if the collapse of the star causes the gamma rays and the black holes, and the black holes merging causes the gravitational waves, I don’t see how the gravitational waves can possibly arrive first.
• John Baez says:
Peter wrote:
Wouldn’t proponents of the single-star-collapsing-hypothesis also need to account for the 0.4 second gap?
Good point!
I know the speed of gravitational waves hasn’t been fully established, but it’s unlikely to be faster than x-rays, right?
According to general relativity the speed of gravitational waves is exactly the speed of light. Anyone who doubts general relativity can’t trust the usual interpretation of anything we see with LIGO. So, we should assume general relativity is right about the speed of gravitational waves when trying to understand this event: otherwise we are tying one hand behind our back.
In an ordinary type II supernova, formed by a collapsing star, the neutrinos come out before the light. The reason is that neutrinos move at almost the speed of light in vacuum, while the light needs to penetrate through the body of the collapsing star, repeatedly getting absorbed and re-radiated.
A good example is supernova 1987a:
Approximately two to three hours before the visible light from SN 1987A reached Earth, a burst of neutrinos was observed at three separate neutrino observatories. This is likely due to neutrino emission, which occurs simultaneously with core collapse, but preceding the emission of visible light. Transmission of visible light is a slower process that occurs only after the shock wave reaches the stellar surface. At 07:35 UT, Kamiokande II detected 11 antineutrinos; IMB, 8 antineutrinos; and Baksan, 5 antineutrinos; in a burst lasting less than 13 seconds. Approximately three hours earlier, the Mont Blanc liquid scintillator detected a five-neutrino burst, but this is generally not believed to be associated with SN 1987A.
Supernova 1987A is considered a ‘peculiar’ type II supernova: for one thing, nobody has yet seen the neutron star that was expected to form from the collapse! There are various possible explanations: it might have formed a black hole… or it might even have formed a quark star.
14. George Dishman says:
The idea that a “nearby” star might be influenced into some cataclysmic response by the energy released in the merger sounds interesting, the question that springs to mind is how stable would its orbit be if it were at a location away from the pair but the same distance from us, or more generally on a hyperbola with 0.4s path difference. These aren’t SMBH so perhaps a binary BH system within a dense globular cluster could produce such configuration.
Of course the idea is crucially dependent on how much energy the star could extract from the wave burst, the luminosity would be enormous at that range but I would guess that any energy absorption would depend on “tidal heating” of the star.
• John Baez says:
Yes, testing the plausibility of this idea requires some calculations and estimates. Unfortunately I’m not in a position to do them. I sure hope people figure out what happened! And if they do, I hope someone adds another comment on this blog article, to let me know.
https://collaborate.princeton.edu/en/publications/particle-number-fluctuations-r%C3%A9nyi-entropy-and-symmetry-resolved-
# Particle number fluctuations, Rényi entropy, and symmetry-resolved entanglement entropy in a two-dimensional Fermi gas from multidimensional bosonization
Mao Tian Tan, Shinsei Ryu
Research output: Contribution to journal › Article › peer-review
26 Scopus citations
## Abstract
We revisit the computation of particle number fluctuations and the Rényi entanglement entropy of a two-dimensional Fermi gas using multidimensional bosonization. In particular, we compute these quantities for a circular Fermi surface and a circular entangling surface. Both quantities display a logarithmic violation of the area law, and the Rényi entropy agrees with the Widom conjecture. Lastly, we compute the symmetry-resolved entanglement entropy for the two-dimensional circular Fermi surface and find that, while the total entanglement entropy scales as RlnR, the symmetry-resolved entanglement scales as RlnR, where R is the radius of the subregion of our interest.
Original language: English (US)
Article number: 235169
Journal: Physical Review B
Volume: 101
Issue number: 23
DOI: https://doi.org/10.1103/PhysRevB.101.235169
State: Published - Jun 15 2020
Externally published: Yes
## All Science Journal Classification (ASJC) codes
• Electronic, Optical and Magnetic Materials
• Condensed Matter Physics
https://www.physicsforums.com/threads/velocity-along-a-frictionless-surface.866839/
# Velocity along a frictionless surface
• #1
## Homework Statement
A body moves down along an inclined plane from A(top) to B(bottom), and then moves on the floor in continuation to some point C. (All surfaces are frictionless)
After reaching B, body is having some acceleration. But while moving from B to C,
a) will it keep on accelerating,
b) or, its acceleration will be zero (constant velocity) from B to C.
2. The attempt at a solution
A frictionless surface doesn't interfere with the motion of the body, so whatever state the body possesses at B (some velocity) will continue to hold, and the body will move with zero acceleration from B to C.
• #2
Merlin3189
Homework Helper
Gold Member
The question sounded a bit odd, "After reaching B, body is having some acceleration."
I would say, "Up to point B, body is having some acceleration."
• #3
The question sounded a bit odd, "After reaching B, body is having some acceleration."
I would say, "Up to point B, body is having some acceleration."
The body has acceleration at B, as it has accelerated from A to B; the question is about the motion from B to C.
• #4
The speed will remain constant from B to C
Why?
Well
Because B and C are on the same horizontal level
And thus there's no question of vertical motion here (they surely aren't going to break the floor and move)
And since the horizontal components of the forces acting on the block (gravity and the normal force) are zero
And since there's no friction
The block will keep moving with a constant velocity from B to C
UchihaClan13
• #5
The speed will remain constant from B to C
Why?
Well
Because B and C are on the same horizontal level
And thus there's no question of vertical motion here (they surely aren't going to break the floor and move)
And since the horizontal components of the forces acting on the block (gravity and the normal force) are zero
And since there's no friction
The block will keep moving with a constant velocity from B to C
UchihaClan13
I was confused about the acceleration part.
• #6
Don't be then :)
The block accelerates from A to B because, as there's no friction, the force mgsinθ, which acts down the incline, accelerates the block
over the entire distance the block traverses/moves.
Once it reaches B, there is a momentary transition and there's some initial acceleration from B to C,
but it's momentary and thus it can be neglected!
UchihaClan13
• #7
As others have said, the speed of the body from B to C will remain constant, since gravity has no component along the direction of motion and there is no friction.
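A small numerical sketch of what the thread concludes (my addition; the incline angle and drop height are made up purely for illustration):

```python
# Acceleration on the frictionless incline, speed at B, and zero acceleration on the floor.
import math

g = 9.8                      # m/s^2
theta = math.radians(30.0)   # incline angle (assumed)
h = 2.0                      # vertical drop from A to B (assumed), in metres

a_incline = g * math.sin(theta)   # constant acceleration from A to B
v_B = math.sqrt(2 * g * h)        # speed at B, by energy conservation (independent of theta)
a_floor = 0.0                     # frictionless, level floor: zero net force, zero acceleration

print(a_incline, v_B, a_floor)    # 4.9 m/s^2, ~6.26 m/s, 0.0 -> constant velocity from B to C
```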
https://mathoverflow.net/questions/195826/quantum-field-theory-integral-notation
# Quantum Field theory - integral notation
I have a problem with understanding how the resolution of the identity of an operator is presented in some literature for physicists.
I'm a student of mathematics, and I understand the notion of a spectral measure (which is sometimes called the resolution of identity) and also have some knowledge of spectral theory (for normal operators).
# Here is my brief explanation what do I understand:
Let $$H$$ be a Hilbert space (with an inner product linear w.r.t. the 2nd coordinate) with an orthonormal basis $$(e_{n})$$ and define a linear operator (diagonal operator) $$A = \sum_{i \geq 1} \lambda_i \left|e_i\right>\left<e_i\right|,$$ where $$(\lambda_i)$$ is a sequence of complex numbers (the properties of this sequence determine the properties of $$A$$ such as boundedness, selfadjointness, compactness etc.); to make sense of $$A$$ we assume that the above series converges in SOT, and I also used Dirac bra-ket notation. The associated spectral measure of $$A$$ is defined via $$E(\Delta) = \sum_i \mathbf{1}_{\Delta}(\lambda_i) \left|e_i\right>\left<e_i\right|,$$ where $$\Delta$$ is an element of the Borel sigma field over the spectrum of $$A$$, and we have that $$\left<x, Ay\right> = \int_{\sigma(A)} \lambda \left<x, E(d\lambda)y\right> \ \ (x,y \in H).$$
Very often physicists would use the following notation for $$A$$ which acts on an element $$\psi \in H$$: $$A\left|\psi\right> = \sum_{i} \lambda_i \left|i\right>\left<i|\psi\right>.$$
# My problems with notation
I started reading some notes and books about quantum field theory, and often it is written that the identity operator $$I$$ on some (separable) Hilbert space $$H$$ has the expansion, called the resolution of the identity, $$I= \int dq^{\prime} \left|q^{\prime}\right>\left<q^{\prime}\right|.$$ I don't know whether it matters here, but $$\{\left|q\right>\}$$ is supposed to be a complete set of states.
Reference: http://eduardo.physics.illinois.edu/phys582/582-chapter5.pdf bottom of p. 129.
# My question
Is the notion of the above resolution of the identity the same as an integral w.r.t. a spectral measure ($$I$$ is a diagonal operator)? If yes, how should I understand the above notation? If not, what do they actually mean by this resolution of the identity, and how do they define this integral? I noticed that in a lot of books concerning quantum mechanics there are many calculations, but not very many definitions and assumptions, which makes the material hard to understand for a mathematician.
The answer is Yes. The interpretation of the notation is quite straight forward: $dq'|q'\rangle\langle q'| = E(dq')$. We need to presume that $E$ is the spectral measure of an operator $Q' = \int q' E(dq') = \int dq'\, q' |q'\rangle\langle q'|$. The only aspect that doesn't necessarily mesh well with your question, as written, is the fact that you've defined a spectral measure $E$ only for operators whose spectrum consists of eigenvalues, since $A|e_i\rangle = \lambda_i |e_i\rangle$. Spectral measures can be defined for operators with any kind of spectrum, including continuous.
• Thanks, I know that there is a spectral theorem for any normal operator defined on a Hilbert space; however, the notation for the resolution of identity in the example which I gave above was for the identity operator, which is in fact diagonal. One quick question: is it supposed to be $$I= \int d q^{\prime} q^{\prime} \left|q^{\prime} \right> \left<q^{\prime}\right|$$ instead of $$I= \int d q^{\prime} \left|q^{\prime} \right> \left<q^{\prime}\right|$$? – Eric Feb 6 '15 at 16:06
• Your first formula gives $Q'$ and not $I$, with $I$ given correctly by your second formula. The prototypical example of $Q'$ is the position operator in quantum mechanics. It has a simple continuous spectrum (in 1-dimension, that is) and that is why its "eigenvectors" are convenient for writing down a resolution of the identity. Essentially, you are representing identity using functional calculus $I = f(Q) = \int dq' f(q') |q'\rangle\langle q'|$, where $f(x) \equiv 1$. – Igor Khavkine Feb 6 '15 at 19:32
• Great answer! Thank you. For the identity we even don't need the functional calculus, we know that $Q^{\prime}$ as a selfadjoint operator admits a unique spectral measure $E$, thus by using the properties of a spectral measure we got $I=E(\sigma(Q^{\prime}))= \int_{\sigma(Q^{\prime})} 1 E(d\lambda)$, which can be of course written in a different notation which involves bra and kets. – Eric Feb 6 '15 at 19:47
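A finite-dimensional toy version of this dictionary is easy to check numerically (my own sketch, not from either answer): on $\mathbb{C}^n$ the projectors $|e_i\rangle\langle e_i|$ sum to the identity, and $\langle x, Ay\rangle$ equals the discrete analogue of $\int_{\sigma(A)} \lambda\, \langle x, E(d\lambda)y\rangle$.

```python
import numpy as np

# Diagonal operator A = sum_i lambda_i |e_i><e_i| on C^n and its spectral
# "measure" E({lambda_i}) = |e_i><e_i|  (all names here are ad hoc).
n = 5
rng = np.random.default_rng(0)
lam = rng.standard_normal(n)                     # eigenvalues lambda_i
e = np.eye(n)                                    # orthonormal basis e_i

proj = [np.outer(e[i], e[i]) for i in range(n)]  # projectors |e_i><e_i|
A = sum(lam[i] * proj[i] for i in range(n))

# Resolution of the identity: E(sigma(A)) = sum_i |e_i><e_i| = I.
assert np.allclose(sum(proj), np.eye(n))

# <x, A y> = sum_i lambda_i <x, E({lambda_i}) y>.  np.vdot conjugates its first
# argument, i.e. the inner product is linear in the second slot, as in the question.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
lhs = np.vdot(x, A @ y)
rhs = sum(lam[i] * np.vdot(x, proj[i] @ y) for i in range(n))
assert np.isclose(lhs, rhs)
print("resolution of the identity and spectral sum both check out")
```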
See $\S$ 4.4 of de la Madrid's "The role of the rigged Hilbert space in quantum mechanics"
https://wiki.lyx.org/Tips/Beamer
|
# Beamer
Categories: Tips, Beamer
Tips for using the Beamer presentation class.
### Enumerations
• To customize the labels for an enumeration list, put the cursor at the start of the first item in the list and click `Insert > Short Title`. This will create an inset labeled 'opt' for optional arguments. Insert the label you want for the first item there. Beamer will automatically replace any occurrence of 1, i or I with the index of each item in Arabic, lower or upper case Roman numerals respectively. Be sure to include any punctuation you want. For example, XY1: would produce item labels 'XY1:', 'XY2:' etc. (A plain-LaTeX sketch of this is given at the end of this section.)
• If you want a label that contains the letter i or I (or a numeral that stays fixed), you need to enclose that part of the label in braces. For instance, '{Hint} I:' will generate labels 'Hint I:', 'Hint II:' etc. but 'Hint I:' will generate labels 'HInt I:', 'HIInt II:' etc. The braces cannot be entered directly; use the TEX button, `Insert > TeX Code` or `Ctrl-L` to add TeX insets to the optional argument inset, then type the braces in them.
• The labels in subitems restart as 1, 2, 3, etc. To get a, b, c, etc. insert in the LaTeX Preamble the command :
```
\setbeamertemplate{enumerate subitem}{\alph{enumii})}
```
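For reference, outside LyX the same label customization corresponds to Beamer's optional "mini template" argument on the `enumerate` environment; a minimal plain-LaTeX sketch (labels and item text are just illustrative):
```
\documentclass{beamer}
\begin{document}
\begin{frame}{Custom enumeration labels}
  % The optional argument is a mini template: occurrences of 1, i or I are
  % replaced by the item index in Arabic, lower or upper case Roman numerals.
  \begin{enumerate}[XY1:]
    \item first item   % rendered as  XY1: first item
    \item second item  % rendered as  XY2: second item
  \end{enumerate}
\end{frame}
\end{document}
```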
### Repeating the Title Slide
To repeat the title slide at the end of the presentation (or anywhere in between):
1. add `\renewcommand\makebeamertitle{\frame[label=mytitle]{\maketitle}}` to the document preamble;
2. at the point where you want the title slide to repeat, create a new frame using the AgainFrame environment and type in `mytitle` as the label (a sketch of the resulting LaTeX is shown below).
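In the exported LaTeX this amounts to labelling the title frame and re-showing it with `\againframe`; a rough sketch of what the two steps produce (assuming LyX's usual `\makebeamertitle` wrapper):
```
% Preamble: give the title frame a label so it can be reused.
\renewcommand\makebeamertitle{\frame[label=mytitle]{\maketitle}}

% ... body of the presentation ...

% Wherever the title slide should appear again
% (this is what the AgainFrame environment with label 'mytitle' generates):
\againframe{mytitle}
```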
### Versions for Note-taking
The `handout` class option tells Beamer to create a version of the presentation with each frame on a single page. To create a handout with space on each page for the audience to take notes, you can use the `handoutWithNotes` package, available from http://www.guidodiepen.nl/2009/07/creating-latex-beamer-handouts-with-notes/ (instructions are given there; the package apparently is not available from CTAN). Install the style file into your local `texmf` tree (somewhere under `tex/latex`) and update the LaTeX file database (typically by running `texhash`, though the exact step is distribution-specific). Then add the following two lines to your document preamble:
``` \usepackage{handoutWithNotes}
\pgfpagesuselayout{1 on 1 with notes}[letterpaper,border shrink=5mm] ```
You can do various customizations in the second line (`a4paper` rather than `letterpaper` to change the paper size, `2 on 1` rather than `1 on 1` to reduce the number of pages, `landscape` inside the optional argument to switch from portrait to landscape mode, and so on). You still need to specify `handout` in the class options field to print one entry per frame, rather than one per overlay.
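Putting the pieces together, the preamble of such a note-taking handout might look like this (a sketch reusing the exact layout line from above):
```
\documentclass[handout]{beamer}  % 'handout': one page per frame, not per overlay
\usepackage{handoutWithNotes}
\pgfpagesuselayout{1 on 1 with notes}[letterpaper,border shrink=5mm]
```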
### Uncovering a Table Row-wise
(This is covered in the Beamer user guide; what follows is mainly adjustments for use within LyX.) To uncover one row of a table at a time, end the last cell in each row (other than the final row and any headings) with `\pause` in a TeX Code (ERT) inset. For more granular control, replace `\pause` with `\uncover<?>{` in ERT at the end of the row above the one you will be uncovering and `}` in ERT at the end of the row being uncovered, where "?" is a valid overlay specification.
The Beamer user guide also offers a tip for using a dark/light alternating background color in the rows of the table. To use it in LyX, add `table` to `Document > Settings > Document Class > Class options > Custom` and something like `\rowcolors[]{1}{blue!20}{blue!10}` in the preamble. That color scheme is the one suggested in the Beamer guide, but you can season it to taste. If you want to use a larger color palette, add `dvipsnames` alongside `table` in the custom class options (separated by a comma).
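A bare-bones plain-LaTeX sketch combining both tips (colors and row contents are arbitrary placeholders):
```
\documentclass[table]{beamer}      % 'table' class option enables \rowcolors
\rowcolors[]{1}{blue!20}{blue!10}  % alternating row background colors
\begin{document}
\begin{frame}{Row-wise uncover}
  \begin{tabular}{ll}
    Header A & Header B \\
    row 1    & value 1 \pause \\   % \pause at the end of the last cell of the row
    row 2    & value 2 \pause \\
    row 3    & value 3 \\
  \end{tabular}
\end{frame}
\end{document}
```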
### Suppressing a Logo on One Slide
This tip is based on an answer posted by Alan Munn at StackExchange. To suppress a logo on selected slides, add the following command to the document preamble: `\newcommand{\nologo}{\setbeamertemplate{logo}{}}`. At the end of the frame prior to the one where you want to remove the logo, add an `EndFrame` environment followed by `{\nologo` in a TeX Code inset (ERT), using a standard environment. Next, build the frame as usual, starting with `BeginFrame` or one of the other frame creation environments. End that frame with another `EndFrame` environment, followed by `}` in ERT. Start the next frame as usual. To suppress the logo from a sequence of consecutive frames, just move the second `EndFrame` and closing `}` to the last frame in the group.
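In plain LaTeX terms the trick is just a local group with an empty logo template; a sketch based on the same StackExchange answer:
```
% Preamble:
\newcommand{\nologo}{\setbeamertemplate{logo}{}}

% Body:
{\nologo                 % open a group and clear the logo template
\begin{frame}{A frame without the logo}
  Content of the logo-free frame.
\end{frame}
}                        % close the group: subsequent frames show the logo again
```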
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8570958971977234, "perplexity": 1825.653143334826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00551.warc.gz"}
|
https://www.msri.org/institutions/500010604
|
# Mathematical Sciences Research Institute
# Institution Profile
School of Mathematical and Natural Sciences
4701 W. Thunderbird Rd.
Glendale, AZ 85306, United States
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569704532623291, "perplexity": 15218.383560344944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145774.75/warc/CC-MAIN-20200223123852-20200223153852-00513.warc.gz"}
|
https://tex.stackexchange.com/questions/100323/uncovering-items-with-changing-bullet-color
|
# Uncovering items with changing bullet color
In this case consecutive items get covered. How could I change this behaviour so that all items would be black and only bullets would get different colours?
\documentclass{beamer}
\setbeamercovered{transparent}
\begin{document}
\begin{frame}
\frametitle{Title}
\begin{itemize}
\item<1-> First
\item<2-> Second
\item<3-> Third
\end{itemize}
\end{frame}
\end{document}
• Check the beameruserguide for alert. Like \begin{itemize}[<alert@+>] – bloodworks Feb 28 '13 at 15:24
## 1 Answer
My answer, adapted from the example in the Beamer user guide, p82:
\documentclass{beamer}
\def\colorize<#1>{%
\temporal<#1>{%
\setbeamercolor{item}{fg=blue}%
}{%
\setbeamercolor{item}{fg=red}%
}{%
\setbeamercolor{item}{fg=blue}%
}
}
\setbeamertemplate{itemize item}[triangle]
\begin{document}
\begin{frame}
\frametitle{Title}
\begin{itemize}
\colorize<1> \item First
\begin{itemize}
\colorize<2> \item First a
\colorize<3> \item First b
\end{itemize}
\colorize<4> \item Second
\colorize<5> \item Third
\end{itemize}
\end{frame}
\end{document}
There must be a better way of doing it though. Does anyone know how to redefine \item so as to get the desired output without having to use an extra command (\colorize here) in front of each \item?
EDIT: \colorize is now compatible with all levels of itemize environments.
• what about the itemize inside itemize? it's not working! – liberias Feb 28 '13 at 19:59
• I'm not sure I approve of your exclamation mark... Is it supposed to convey urgency or irritation? Anyway, you get the result you want by substituting item for itemize item in the definition of `\colorize'. – jub0bs Feb 28 '13 at 20:09
• sorry for the exclamation mark, tomorrow i have to present my thesis and i'm a bit nervous :] – liberias Feb 28 '13 at 20:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623299241065979, "perplexity": 3084.3632694214853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00165.warc.gz"}
|
http://math.stackexchange.com/questions/433880/prove-that-for-any-invertable-n-times-n-matrix-a-and-any-b-in-mathbbrn
|
# Prove that for any invertible $n\times n$ matrix $A$, and any $b\in\mathbb{R}^n$, there exists a unique solution to $Ax=b$
I think I've got the two ideas needed to solve this, but it feels like they're not tied together properly. I'm not sure if I'm allowed to do something like this:
Let $A$ be an invertible $n\times n$ matrix, and $b$ be an $n$-dimensional vector.
\begin{align} Ax=b&\Longrightarrow A^{-1}Ax=A^{-1}b\\ &\Longrightarrow x=A^{-1}b \end{align}
Therefore, there exists at least one solution to the equation $Ax=b$. Additionally, for the equation $Ay=b$:
\begin{align} Ay=b&\Longrightarrow A^{-1}Ay=A^{-1}b\\ &\Longrightarrow y=A^{-1}b\\ &\Longrightarrow y=x \end{align}
Therefore, for any two unique combinations of $A$ and $b$, there is a unique $x$ for $Ax=b$.
The problem I feel exists with this is that I'm doing two separate proofs and referencing one in the other, when I feel like I can only do that if they're combined into one single proof. Am I mistaken?
-
Curiously when I visited this question the only upvoted (and accepted) answer was the unique (sic) one that does not correctly address the interrogation that OP expressed. The point is that unique existence really has two different aspects, and that showing them separately is quite normal. Although one can present them in a combined fashion (see the answer by copper.hat) as an equivalence between two equations (but the equivalence still involves separate implications in two directions). – Marc van Leeuwen May 14 at 11:07
A handy way to deal with uniqueness proofs is to assume by contradiction that there exist distinct solutions.
Assume that $x_1$ and $x_2$ are distinct solutions to $Ax=b$.
Then, $Ax_1 = b$ and $Ax_2 = b$. Since $A$ is invertible, we have $x_1 = A^{-1}b$ and $x_2 = A^{-1}b$. Thus, because $A^{-1}b = A^{-1}b$, we have by transitivity $x_1 = x_2$, but we assumed they are distinct.
Therefore, the solution must be unique.
This is essentially what you're trying to do, but it is not two different proofs.
Instead, we leverage the power of transitivity and reflexivity of equality to show that distinct solutions cannot exist.
-
Basically, you're just showing that if $x_1$ and $x_2$ are solutions of the system, they must be equal. Note that this actually is not a proof by contradiction, since the assumption that they are distinct is unnecessary. That of course does not take away the fact that it is a handy way of thinking about it. – Eric Spreen Jul 1 '13 at 19:01
@EricSpreen Of course, which is why I worded it like that. It is a natural thing to think "well, what if there were two solutions?" The remainder of the proof follows a bit more naturally from there, and it removes some of the uneasiness that comes from just stating equality and hoping it works. – Arkamis Jul 1 '13 at 19:10
This is just doing half the work. Showing unique existence requires showing uniqueness and showing existence. You did not do the latter, whereas the proof OP presents properly does both aspects. Therefore I find this as an answer to the question OP posed quite misleading. – Marc van Leeuwen May 14 at 11:03
@Marcvanleeuwen The implication here was more to clean up the OP's second part, not that there didn't need to be two parts. My comments were maybe misleading. "Not two different proofs" wasn't meant to imply there weren't two steps, but rather that it wasn't necessarily the existence proof applied twice. I'll clean up the wording. – Arkamis May 14 at 12:39
If $A$ is invertible and $b$ is given, then $Ax=b$ iff $x = A^{-1}b$.
-
If $A$ is invertible, left multiplication by $A$ is an isomorphism on $\mathbb{R}^n$. An isomorphism is a bijective linear map. For the linear system $Ax = b$, surjectivity tells us that a solution exists, and by injectivity the solution is unique.
-
Actually, your first calculation shows uniqueness, as starting from $Ax=b$ you infer that $x=A^{-1}b$. But by simply plugging in the value $A^{-1}b$ for $x$ you also get existence, since for this choice of $x$ you get $Ax=AA^{-1}b=b$.
Note that the first step used the existence of a left inverse (i.e. you made use of the fact that $A^{-1}A$ is the identity), whereas the existence made use of the right inverse property (i.e. that $AA^{-1}$ is the identity).
-
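As an illustrative numerical sanity check of the argument above (not a substitute for the proof), one can verify for a randomly generated invertible matrix that $x=A^{-1}b$ solves the system and coincides with the output of a direct solver; a small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))        # a random 4x4 matrix is invertible almost surely
b = rng.normal(size=4)

x_inv = np.linalg.inv(A) @ b       # x = A^{-1} b  (existence)
x_solve = np.linalg.solve(A, b)    # direct solver

assert np.allclose(A @ x_inv, b)   # A x = b indeed holds
assert np.allclose(x_inv, x_solve) # both routes give the same (unique) solution
```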
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8815475702285767, "perplexity": 264.30972698622537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446300.49/warc/CC-MAIN-20151124205406-00025-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://ctftime.org/writeup/25665
|
Tags: elliptic ecc curves
## TL;DR:
- The curve given was a parabola
- We had 4 points (enough to recover the prime P)
- Calculate the shared secret and decrypt the flag
Original writeup (https://gitcdn.xyz/cdn/0verflowme/faf80e911f13a5a264bc45a1870870e3/raw/e242d3ed9b334a962e2420776676ba08edf23e95/HomeBrew.html).
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8037084341049194, "perplexity": 16928.703871890422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00758.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/dcds.2017165
|
# American Institute of Mathematical Sciences
• Previous Article
Dynamical properties of nonautonomous functional differential equations with state-dependent delay
• DCDS Home
• This Issue
• Next Article
Existence of SRB measures for a class of partially hyperbolic attractors in banach spaces
August 2017, 37(7): 3921-3938. doi: 10.3934/dcds.2017165
## Strong solutions to Cauchy problem of 2D compressible nematic liquid crystal flows
1 School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China 2 College of Science, Northeast Electric Power University, Jilin 132013, China 3 School of Mathematics, Liaoning University, Shenyang 110036, China
* Corresponding author: S. Zheng
Received July 2015 Revised February 2017 Published April 2017
This paper studies the local existence of strong solutions to the Cauchy problem of the 2D simplified Ericksen-Leslie system modeling compressible nematic liquid crystal flows, coupled via $\rho$ (the density of the fluid), $u$ (the velocity field), and $d$ (the macroscopic/continuum molecular orientations). Notice that the technique used for the corresponding 3D local well-posedness of strong solutions fails in the 2D case, because the $L^p$-norm ($p>2$) of the velocity $u$ cannot be controlled in terms only of $\rho^{\frac{1}{2}}u$ and $\nabla u$ here. In the present paper, under the framework of weighted approximation estimates introduced in [J. Li, Z. Liang, On classical solutions to the Cauchy problem of the two-dimensional barotropic compressible Navier-Stokes equations with vacuum, J. Math. Pures Appl. (2014) 640-671] for Navier-Stokes equations, we obtain the local existence of strong solutions to the 2D compressible nematic liquid crystal flows.
Citation: Yang Liu, Sining Zheng, Huapeng Li, Shengquan Liu. Strong solutions to Cauchy problem of 2D compressible nematic liquid crystal flows. Discrete & Continuous Dynamical Systems, 2017, 37 (7) : 3921-3938. doi: 10.3934/dcds.2017165
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4335353970527649, "perplexity": 3800.9034649157757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00139.warc.gz"}
|
https://aviation.stackexchange.com/questions/22616/how-is-a-traffic-pattern-oriented-entered-and-exited
|
# How is a traffic pattern oriented, entered and exited?
If you've received traffic entry instructions from Heathrow Tower (sim) to fly left downwind for runway 9L, does that mean to be to the left of runway 9L (for a right-turning traffic pattern, where downwind is north of the runway) or to enter a left-turning traffic pattern? I ask because the latter would take you over runway 9R/27L and I'm unsure whether this is a problem or whether it's ok because of the difference in altitude between a plane in 9L's downwind and 9R's/27L's final/upwind.
Also, if you've received clearance from runway 9L to depart north, should the departure be done from the traffic pattern? E.g. you take-off, turn crosswind, turn downwind then turn base and depart the airspace? Or should you continue straight until you've left the airspace, then change course?
• Why Heathrow? This would never happen at Heathrow which does not have "standard" patterns. All arrivals and depatures are via SIDs and STARs or vectors from ATC. – Simon Nov 2 '15 at 11:16
• I should've added that a) this is a sim (which explains why the question might not make sense) and b) I'm quite new to aviation and only chose Heathrow because it's a familiar place. Would there be an answer if it were another airport with identical runways that did have standard patterns? – Rich Jenks Nov 2 '15 at 11:26
• Left downwind means left-hand turns in the circuit, so you should pass with runway 09 on your left, travelling on heading 270 - then do a few left turns and you're on final. You don't cross over the other runway's flight path. (And yes, I appreciate you're asking about a simulation so you're not really flying visual circuits into Heathrow!) – Andy Nov 2 '15 at 11:54
• One reason you'd never fly to Heathrow is the \$3500 landing fee. – GdD Nov 2 '15 at 11:56
• VFR or IFR? And you received the clearance to join the left traffic pattern RWY 9L from where? Entering from which direction? And where did you receive the clearance, VATSIM or IVAO? :D – SentryRaven Nov 2 '15 at 12:49
The instruction to join a "left downwind" is used to clear an aircraft into a standard traffic pattern with lefthand turns. The word "left" could be omitted, as only right turns and righthand traffic patterns are called with "right". In the case of EGLL - London Heathrow the standard pattern would be a northern traffic pattern for runways 09L/09R and a southern traffic pattern for runway 27L/27R.
If you should find yourself in the unlikely situation that you are VFR and want to depart from Heathrow, a clearance to leave the CTR in a northerly direction would be best executed from the crosswind leg, but clarification on frequency never killed anybody:
A: Heathrow Tower, G-ABCD, request leave CTR direct northbound after crosswind 09L.
or
A: Heathrow Tower, G-ABCD, request heading 360 after turning crosswind 09L.
When departing an aerodrome with traffic, it is unwise to leave the CTR along the extended centerline/upwind, as you are impeding other traffic, especially in slower aircraft.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26976901292800903, "perplexity": 5188.820580289728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604849.31/warc/CC-MAIN-20200121162615-20200121191615-00551.warc.gz"}
|
http://www.physicsforums.com/showthread.php?s=4fdc7c89ae40760811eb8bbc973c8ef3&p=3958549
|
# Expanding Gamma function around poles
by DMESONS
Tags: expanding, function, gamma, poles
P: 27 Can someone help me to expand the following gamma functions around the pole ε, at first order in ε: $\Gamma[(1/2) \pm (ε/2)]$ where ε = d-4
Sci Advisor P: 3,448 Γ(½ ± ε/2) ≈ Γ(½) ± ε/2 Γ'(½) No, seriously.. Well, you also need to use the digamma function, ψ(x) = Γ'(x)/Γ(x). And the values Γ(½) = √π and ψ(½) = - γ - 2 ln 2 where γ is Euler's constant.
PF Patron P: 413 $$\Gamma(\frac{1}{2} - \frac{\epsilon}{2}) = \sqrt{\pi }+\frac{1}{2} \sqrt{\pi } \epsilon (\gamma_E +\log (4))+O\left(\epsilon ^2\right)$$ $$\Gamma(\frac{1}{2} + \frac{\epsilon}{2}) = \sqrt{\pi }+\frac{\sqrt{\pi } \epsilon \psi ^{(0)}\left(\frac{1}{2}\right)}{2}+O\left(\epsilon ^2\right)$$
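Combining the two replies and using $\Gamma(\tfrac{1}{2})=\sqrt{\pi}$ and $\psi^{(0)}(\tfrac{1}{2})=-\gamma_E-2\ln 2$, both expansions can be written uniformly as $$\Gamma\left(\tfrac{1}{2} \pm \tfrac{\epsilon}{2}\right) = \sqrt{\pi}\left[1 \mp \frac{\epsilon}{2}\left(\gamma_E + 2\ln 2\right)\right] + O\left(\epsilon^2\right).$$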
P: 27
## Expanding Gamma function around poles
Bill_K and Hepth, I am so grateful for your help
I am new to this subject
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954403817653656, "perplexity": 9204.105082822365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163049020/warc/CC-MAIN-20131204131729-00024-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://www.mathworks.com/help/symbolic/mupad_ug/z-transforms.html?requestedDomain=www.mathworks.com&nocookie=true
|
# Documentation
## Z-Transforms
The Z-transform `F(z)` of a function `f(k)` is defined as follows:
`$F(z)=\sum_{k=0}^{\infty}\frac{f(k)}{z^{k}}$`
If `R` is a positive number, such that the function `F(z)` is analytic on and outside the circle ```|z| = R```, then the inverse Z-transform is defined as follows:
`$f(k)=\frac{1}{2\pi i}\oint_{|z|=R}F(z)\,z^{k-1}\,dz,\qquad k=0,1,2,\ldots$`
You can consider the Z-transform as a discrete equivalent of the Laplace transform.
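As a quick worked example of the definition (independent of MuPAD), the geometric sequence `f(k) = a^k` has the Z-transform
`$F(z)=\sum_{k=0}^{\infty}\frac{a^{k}}{z^{k}}=\frac{1}{1-a/z}=\frac{z}{z-a},\qquad |z|>|a|.$`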
To compute the Z-transform of an arithmetical expression, use the `ztrans` function. For example, compute the Z-transform of the following expression:
`S := ztrans(sinh(n), n, z)`
If you know the Z-transform of an expression, you can find the original expression or a mathematically equivalent form by computing the inverse Z-transform. To compute the inverse Z-transform, use the `iztrans` function. For example, compute the inverse Z-transform of the expression `S`:
`iztrans(S, z, n)`
Suppose you compute the Z-transform of an expression and then compute the inverse Z-transform of the result. In this case, MuPAD® can return an expression that is mathematically equivalent to the original one, but presented in a different form. For example, compute the Z-transform of the following expression:
`C := ztrans(exp(n), n, z)`
Now, compute the inverse Z-transform of the resulting expression `C`. The result differs from the original expression:
`invC := iztrans(C, z, n)`
Simplifying the resulting expression `invC` gives the original expression:
`simplify(invC)`
Besides arithmetical expressions, the `ztrans` and `iztrans` functions also accept matrices of arithmetical expressions. For example, compute the Z-transform of the following matrix:
```A := matrix(2, 2, [1, n, n + 1, 2*n + 1]): ZA := ztrans(A, n, z)```
Computing the inverse Z-transform of `ZA` gives the original matrix `A`:
`iztrans(ZA, z, n)`
The `ztrans` and `iztrans` functions let you evaluate the transforms of an expression or a matrix at a particular point. For example, evaluate the Z-transform of the following expression for the value `z = 2`:
`ztrans(1/n!, n, 2)`
Evaluate the inverse Z-transform of the following expression for the value `n = 10`:
`iztrans(z/(z - exp(x)), z, 10)`
If MuPAD cannot compute the Z-transform or the inverse Z-transform of an expression, it returns an unresolved transform:
`ztrans(f(n), n, z)`
`iztrans(F(z), z, n)`
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851475954055786, "perplexity": 473.18699353362115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00407-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.unisannio.it/it/biblio?f%5Bauthor%5D=13091
|
UNIVERSITÀ DEGLI STUDI DEL SANNIO - Benevento
# Pubblicazioni di ateneo
Found 33 results
Author Title Type [Year]
Filters: Author is Silvestrini, P.
2018
Physica C: Superconductivity and its Applications, vol. 555, pp. 35-38, 2018.
IEEE Transactions on Applied Superconductivity, vol. 28, no. 4, 2018.
2016
IEEE Transactions on Applied Superconductivity, vol. 26, no. 3, 2016.
2012
Physics Procedia, vol. 36, pp. 371-376, 2012.
2010
Journal of Physics: Conference Series, vol. 234, no. PART 4, 2010.
2007
Open Systems and Information Dynamics, vol. 14, no. 2, pp. 209-216, 2007.
Physics Letters, Section A: General, Atomic and Solid State Physics, vol. 370, no. 5-6, pp. 499-503, 2007.
IEEE Transactions on Applied Superconductivity, vol. 17, no. 2, pp. 132-135, 2007.
2006
Quantum Computing in Solid State Systems, pp. 103-110, 2006.
Quantum Computing in Solid State Systems, pp. 1-337, 2006.
Physics Letters, Section A: General, Atomic and Solid State Physics, vol. 356, no. 6, pp. 435-438, 2006.
Journal of Physics: Conference Series, vol. 43, no. 1, pp. 1401-1404, 2006.
Journal of Physics: Conference Series, vol. 43, no. 1, pp. 1405-1408, 2006.
2005
Applied Physics Letters, vol. 87, no. 17, pp. 1-3, 2005.
Physics Letters, Section A: General, Atomic and Solid State Physics, vol. 336, no. 1, pp. 71-75, 2005.
2004
Institute of Physics Conference Series, vol. 181, pp. 101-107, 2004.
Superconductor Science and Technology, vol. 17, no. 5, pp. S385-S388, 2004.
Physical Review B - Condensed Matter and Materials Physics, vol. 70, no. 17, pp. 1-4, 2004.
2003
IEEE Transactions on Applied Superconductivity, vol. 13, no. 2 I, pp. 1001-1004, 2003.
International Journal of Modern Physics B, vol. 17, no. 4-6 II, pp. 762-767, 2003.
2002
Applied Physics Letters, vol. 80, no. 16, pp. 2952-2954, 2002.
Physica C: Superconductivity and its Applications, vol. 372-376, no. PART 1, pp. 185-188, 2002.
2001
IEEE Transactions on Applied Superconductivity, vol. 11, no. 1 I, pp. 994-997, 2001.
Applied Physics Letters, vol. 79, no. 8, pp. 1145-1147, 2001.
2000
International Journal of Modern Physics B, vol. 14, no. 25-27, pp. 3050-3055, 2000.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719936847686768, "perplexity": 2014.0341883040294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315544.11/warc/CC-MAIN-20190820133527-20190820155527-00367.warc.gz"}
|
https://www.cfd-online.com/W/index.php?title=Realisability_and_Schwarz'_inequality&oldid=7717
|
Realisability and Schwarz' inequality
Realisability is the minimum requirement to prevent a turbulence model from generating non-physical results. For a model to be realisable, the normal Reynolds stresses must be non-negative and the Schwarz' inequality must be satisfied between fluctuating quantities:
$\left\langle{u'_\alpha u'_\alpha}\right\rangle \geq 0$
$\frac{\left\langle{u'_\alpha u'_\beta}\right\rangle^2}{\left\langle{u'_\alpha u'_\alpha}\right\rangle \left\langle{u'_\beta u'_\beta}\right\rangle} \leq 1$
where there is no summation over the indices. Some workers only apply the first inequality to satisfy realisability, or maintain non-negative values of $k$ and $\epsilon$. This "weak" form of realisability is satisfied in non-linear models by setting $C_\mu=0.09$.
References
Speziale, C.G. (1991), "Analytical methods for the development of Reynolds-stress closures in turbulence", Ann. Rev. Fluid Mechanics, Vol. 23, pp107-157.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6511232256889343, "perplexity": 4674.201217975195}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690268.19/warc/CC-MAIN-20170925002843-20170925022843-00369.warc.gz"}
|
http://outshine-the-sun.blogspot.com/2015/06/estranged-notions-6-varieties-of.html
|
Monday, 1 June 2015
Estranged Notions: The 6 Varieties of Atheism (and Which Are Most Defensible)
Today's post:
The 6 Varieties of Atheism (and Which Are Most Defensible)
Unsurprisingly, Feser fails to avoid taking his usual potshots at his bêtes noires, the "New Atheists". Other than that his categorization is quite simplistic (and fascinating in that he doesn't seem to notice that he defines 9 varieties, not 6).
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963314414024353, "perplexity": 6114.352724356277}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589557.39/warc/CC-MAIN-20180717031623-20180717051623-00542.warc.gz"}
|
http://kg15.herokuapp.com/abstracts/253
|
# Even orientations of graphs.
### Domenico Labbate Dipartimento di Matematica, Informatica ed Economia - Università degli Studi della Basilicata - Potenza (Italy)
#### John Sheehan Department of Mathematical Sciences, King's College, Aberdeen (Scotland)
PDF
Minisymposium: GENERAL SESSION TALKS
Content: A graph $G$ is $1$--extendable if every edge belongs to at least one $1$--factor. Let $G$ be a graph with a $1$--factor $F$. Then an {\em even $F$--orientation} of $G$ is an orientation in which each $F$--alternating cycle has an even number of edges directed in the same fixed direction around the cycle. We examine the structure of $1$--extendable graphs $G$ which have no even $F$--orientation, where $F$ is a fixed $1$--factor of $G$, and we give a characterization for $k$--regular graphs with $k\ge 3$ and for graphs with connectivity at least four. Moreover, we point out a relationship between our results on even orientations and Pfaffian graphs.
Back to all abstracts
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163431286811829, "perplexity": 1013.1870017308454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056325.1/warc/CC-MAIN-20210416100222-20210416130222-00342.warc.gz"}
|
https://www.jcdp.or.kr/journal/view.php?number=43
|
Journal of Coastal Disaster Prevention 2015;2(3):107-112. Published online July 30, 2015.
Risk Analysis of Breakwater Caisson Under Wave Attack, Part II: Load Surface Approximation
Dong-Hyawn Kim
Abstract: A new load-surface-based approach to reliability analysis of caisson-type breakwaters is proposed. Uncertainties of the horizontal and vertical wave loads acting on the breakwater are considered by using so-called load surfaces, which can be estimated as functions of wave height, water level, etc. Gradient-based reliability analysis, such as the First Order Reliability Method (FORM), can then be applied to find the probability of failure under wave action. Therefore, reliability analysis of breakwaters with uncertainties in both wave height and water level becomes possible. In addition, uncertainty in wave breaking can be taken into account by using the wave height ratio, which relates the significant wave height to the maximum wave height. In numerical examples, the proposed approach was applied to the reliability analysis of a caisson breakwater under wave attack which may undergo partial or full wave breaking.
Key Words: Load surface; Reliability; Caisson; Breakwater; Wave breaking; FORM
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9390890598297119, "perplexity": 3562.231840166621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00346.warc.gz"}
|
https://chem.libretexts.org/Courses/Santa_Barbara_City_College/SBCC_Chem_101%3A_Introductory_Chemistry/00%3A_Front_Matter/03%3A_Table_of_Contents
|
This is a LibreText ebook used to support CHEM101. It maps open source information onto the outline of Tro's Introductory Chemistry text but has been slightly modified to reflect the order of topics taught in the course.
• ## 1: The Chemical World
Chemistry is the study of matter and the ways in which different forms of matter combine with each other. You study chemistry because it helps you to understand the world around you. Everything you touch or taste or smell is a chemical, and the interactions of these chemicals with each other define our universe.
• ## 2: Measurement and Problem Solving
Chemistry, like all sciences, is quantitative. It deals with quantities, things that have amounts and units. Dealing with quantities is very important in chemistry, as is relating quantities to each other. In this chapter, we will discuss how we deal with numbers and units, including how they are combined and manipulated.
• ## 5: Molecules and Compounds
There are many substances that exist as two or more atoms connected together so strongly that they behave as a single particle. These multiatom combinations are called molecules. A molecule is the smallest part of a substance that has the physical and chemical properties of that substance. In some respects, a molecule is similar to an atom. A molecule, however, is composed of more than one atom.
• ## 7: Chemical Reactions
How do we compare amounts of substances to each other in chemical terms when it is so difficult to count to a hundred billion billion? Actually, there are ways to do this, which we will explore in this chapter. In doing so, we will increase our understanding of stoichiometry, which is the study of the numerical relationships between the reactants and the products in a balanced chemical reaction.
• ## 8: Gases
Gases have no definite shape or volume; they tend to fill whatever container they are in. They can compress and expand, sometimes to a great extent. Gases have extremely low densities, one-thousandth or less the density of a liquid or solid. Combinations of gases tend to mix together spontaneously; that is, they form solutions. Air, for example, is a solution of mostly nitrogen and oxygen. Any understanding of the properties of gases must be able to explain these characteristics.
• ## 10: Chemical Bonding
How do atoms make compounds? Typically they join together in such a way that they lose their identities as elements and adopt a new identity as a compound. These joins are called chemical bonds. But how do atoms join together? Ultimately, it all comes down to electrons. Before we discuss how electrons interact, we need to introduce a tool to simply illustrate electrons in an atom.
• ## 11: Liquids, Solids, and Intermolecular Forces
In the earlier chapter on gases, we discussed their properties. Here, we consider some properties of liquids and solids. As a review, the table below lists some general properties of the three phases of matter.
• ## 12: Solubility & Reaction Types
A chemical reaction is a process that leads to the transformation of one set of chemical substances to another. Chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation.
• ## 13: Solutions
Solutions play a very important role in many biological, laboratory, and industrial applications of chemistry. Of particular importance are solutions involving substances dissolved in water, or aqueous solutions. Solutions represent equilibrium systems, and the lessons learned in our last unit will be of particular importance again. Quantitative measurements of solutions are another key component of this unit.
• ## 14: Acids and Bases
Acids and bases are common substances found in many every day items, from fruit juices and soft drinks to soap. In this unit we'll exam what the properties are of acids and bases, and learn about the chemical nature of these important compounds. You'll learn what pH is and how to calculate the pH of a solution.
• ## 15: Radioactivity and Nuclear Chemistry
Radioactivity has a colorful history and clearly presents a variety of social and scientific dilemmas. In this chapter we will introduce the basic concepts of radioactivity, nuclear equations and the processes involved in nuclear fission and nuclear fusion.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8141797184944153, "perplexity": 815.530903968242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153816.3/warc/CC-MAIN-20210729043158-20210729073158-00119.warc.gz"}
|
http://www.english-efl.com/spelling-rules/
|
# Spelling rules
### Final Silent “e”
If a word ends in a consonant followed by a silent “e”, drop the “e” before endings beginning with a vowel, but keep the “e” before endings beginning with a consonant:
engage becomes engaging but engagement
care becomes caring but careful
fate becomes fatal but fateful
scarce becomes scarcity but scarcely
### Spelling words with “ei” and “ie”
When the sound is a long “e” (as in feed), write “i” before “e”, except after “c”. After “c” reverse the spelling (“ei”):
After other letters
believe, yield, reprieve
After c
ceiling, perceive, conceit
The problem with this rule is that it works only when “ei”/”ie” sounds like the “ee” in feet. If it has any other sound, you should write “ei” even after letters other than “c”:
foreign, vein, freight
### Spelling final “y” before a suffix
When a word ends in “y” preceded by a consonant, you should usually change the “y” to “i” before adding the suffix:
curly becomes curlier
party becomes parties
thirty becomes thirties, thirtieth
However, if the suffix already begins with “i”, keep the “y” (except before the suffix “-ize”):
thirty becomes thirtyish
fry becomes frying
agony becomes agonize
memory becomes memorize
When the ending “y” is preceded by a vowel (“a” “e” “i” “o” or “u”), “y” does not change to “i”:
journey becomes journeying
trolley becomes trolleys
### Spelling Words with Double Consonants
Double the final consonant before a suffix beginning with a vowel if both of the following are true: the consonant ends a stressed syllable or a one-syllable word, and the consonant is preceded by a single vowel:
drag becomes dragged
wet becomes wetter
occur becomes occurred, occurring
refer becomes referral, referring
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721252679824829, "perplexity": 18975.57765074657}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258621.77/warc/CC-MAIN-20190526025014-20190526051014-00133.warc.gz"}
|
https://networkx.github.io/documentation/latest/reference/algorithms/generated/networkx.algorithms.bipartite.matching.minimum_weight_full_matching.html
|
# networkx.algorithms.bipartite.matching.minimum_weight_full_matching¶
minimum_weight_full_matching(G, top_nodes=None, weight='weight')[source]
Returns the minimum weight full matching of the bipartite graph G.
Let $$G = ((U, V), E)$$ be a complete weighted bipartite graph with real weights $$w : E \to \mathbb{R}$$. This function then produces a maximum matching $$M \subseteq E$$ which, since the graph is assumed to be complete, has cardinality
$\lvert M \rvert = \min(\lvert U \rvert, \lvert V \rvert),$
and which minimizes the sum of the weights of the edges included in the matching, $$\sum_{e \in M} w(e)$$.
When $$\lvert U \rvert = \lvert V \rvert$$, this is commonly referred to as a perfect matching; here, since we allow $$\lvert U \rvert$$ and $$\lvert V \rvert$$ to differ, we follow Karp [1] and refer to the matching as full.
Parameters
• G (NetworkX graph) – Undirected bipartite graph
• top_nodes (container) – Container with all nodes in one bipartite node set. If not supplied it will be computed.
• weight (string, optional (default=’weight’)) – The edge data key used to provide each value in the matrix.
Returns
matches – The matching is returned as a dictionary, matches, such that matches[v] == w if node v is matched to node w. Unmatched nodes do not occur as a key in matches.
Return type
dictionary
Raises
• ValueError – Raised if the input bipartite graph is not complete.
• ImportError – Raised if SciPy is not available.
Notes
The problem of determining a minimum weight full matching is also known as the rectangular linear assignment problem. This implementation defers the calculation of the assignment to SciPy.
References
[1] Richard Manning Karp: An Algorithm to Solve the m x n Assignment Problem in Expected Time O(mn log n). Networks, 10(2):143–152, 1980.
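A small usage sketch (the graph, node names and edge weights below are made up for illustration; SciPy must be installed):

```python
import networkx as nx
from networkx.algorithms.bipartite.matching import minimum_weight_full_matching

# Complete weighted bipartite graph with U = {u0, u1} and V = {v0, v1}.
G = nx.Graph()
G.add_weighted_edges_from([
    ("u0", "v0", 4), ("u0", "v1", 1),
    ("u1", "v0", 2), ("u1", "v1", 3),
])

matching = minimum_weight_full_matching(G, top_nodes=["u0", "u1"])
print(matching)
# Expected pairing: u0-v1 and u1-v0 (total weight 1 + 2 = 3), returned as a
# dictionary mapping each matched node to its partner.
```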
|
http://www.computer.org/csdl/trans/tp/2009/08/ttp2009081502-abs.html
|
Issue No.08 - August (2009 vol.31)
pp: 1502-1509
Rozenn Dahyot, Trinity College Dublin, Dublin
ABSTRACT
The Standard Hough Transform is a popular method in image processing and is traditionally estimated using histograms. Densities modeled with histograms in high-dimensional spaces and/or with few observations can be very sparse and highly demanding in memory. In this paper, we propose first to extend the formulation to continuous kernel estimates. Second, when dependencies between variables are well taken into account, the estimated density is also robust to noise and insensitive to the choice of the origin of the spatial coordinates. Finally, our new statistical framework is unsupervised (all needed parameters are automatically estimated) and flexible (priors can easily be attached to the observations). We show experimentally that our new modeling better encodes the alignment content of images.
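As a rough illustration only (this is not the estimator proposed in the paper): the standard Hough transform accumulates, for each edge point, votes over the line parameters (θ, ρ) with ρ = x·cosθ + y·sinθ, and replacing the histogram of those votes with a kernel density estimate is the general direction the abstract describes. A toy sketch in Python, with made-up point data:

```python
import numpy as np
from scipy.stats import gaussian_kde

def hough_votes(points, n_theta=180):
    """Collect (theta, rho) votes for every point; rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = []
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        votes.append(np.stack([thetas, rhos], axis=1))
    return np.concatenate(votes)  # shape (n_points * n_theta, 2)

# Points roughly on the line y = x (plus noise), purely for illustration.
rng = np.random.default_rng(0)
pts = [(t, t + rng.normal(scale=0.05)) for t in np.linspace(0, 1, 25)]

votes = hough_votes(pts)
# A continuous (kernel) density over (theta, rho) instead of a histogram accumulator.
density = gaussian_kde(votes.T)
print(density([[3 * np.pi / 4], [0.0]]))  # relatively high near theta = 3*pi/4, rho = 0 (the line y = x)
```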
INDEX TERMS
Hough transform, Radon transform, kernel probability density function, uncertainty, line detection.
CITATION
Rozenn Dahyot, "Statistical Hough Transform", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.31, no. 8, pp. 1502-1509, August 2009, doi:10.1109/TPAMI.2008.288
|
https://www.physicsforums.com/threads/calculating-omega-as-a-function-of-time-for-a-flywheel.556868/
|
# Calculating omega as a function of time for a flywheel
1. Dec 4, 2011
### dannyR
Hiya all, I've done an experiment in which a mass was hung from a light string wrapped around the axle of a flywheel. The mass was released and the flywheel began to rotate.
During the calculations I've found it would be great to have ω as a function of time, and I've been stuck on how to get this.
Could I do a force diagram using F=ma? But then I'm unsure of the mass "m".
Would I use the mass which is falling and add the moment of inertia of the flywheel, or is this very wrong? :(
Or could I use the energy stored, such as
mgh = ½Iω² + ½mr²ω² + K
mgh, loss in potential energy of the falling mass
kinetic energy in the flywheel
kinetic energy in the falling mass
where K would be the frictional loss, which I think would be proportional to ωr.
Could I replace h, the height the mass has fallen, using the F=ma bit I talked about above, then substitute h = ½at², and then solve for t or ω?
I've thought about this a lot and always been stopped by not knowing how to calculate something, or how to use F=ma together with the moment-of-inertia stuff. Could someone please point me in the right direction?
Thanks a lot, Danny
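A minimal sketch of the force/torque route hinted at above (an assumption-laden outline, not a full solution): take m as the falling mass, T the string tension, r the radius of the axle the string is wrapped around, and I the flywheel's moment of inertia, and neglect friction and the string's mass. Then

$$ma = mg - T, \qquad I\alpha = Tr, \qquad a = r\alpha,$$

which combine to a constant linear acceleration

$$a = \frac{mgr^2}{I + mr^2},$$

so, starting from rest,

$$\omega(t) = \frac{a}{r}\,t = \frac{mgr}{I + mr^2}\,t.$$

With friction included (the K term above), ω(t) grows more slowly than this linear estimate.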
|
https://blackbeltreview.wordpress.com/category/potpourri/
|
## How to Calculate Your Odds of Winning the Lottery
In Canada, there is a lottery called LOTTO MAX. Basically, you pick 7 unique numbers between 1 and 49.
To calculate your odds of winning the lottery, you need to find how many ways there are to pick 7 unique numbers, in any order, from a group of 49 numbers.
Here is the formula:
= $\frac{49!}{7! \times (49-7)!}$
= $\frac{49 \times 48 \times 47 \times \cdots \times 3 \times 2 \times 1}{(7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1)(42 \times 41 \times 40 \times \cdots \times 3 \times 2 \times 1)}$
= $85,900,584$
Therefore, the odds of winning Lotto Max is 1 in 85,900,584.
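As a quick cross-check (a minimal sketch, not part of the original post; math.comb needs Python 3.8+):

```python
import math

# Number of ways to choose 7 distinct numbers out of 49, order ignored.
print(math.comb(49, 7))  # 85900584 -> odds of 1 in 85,900,584
```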
Good luck!
## How to make Play Dough
1. Mix 2 cups of flour, ½ cup of salt, and 4 tablespoons of cream of tartar in the slow cooker.
2. Pour in 2 cups of water, 2 tablespoons of oil, and a bit of food coloring.
3. Place a damp towel under the lid, and cook the colorful mess on high for 45 to 60 minutes.
4. Stir often while it cooks.
5. When done (the dough should easily form a ball), remove the mixture from the slow cooker, knead several times, and allow it to cool.
6. When stored properly, the playtime essential should last 3 to 4 months.
## How to make Crayons
1. Sort all of your old crayons by color family, and remove the paper wrapping.
2. Place the broken bits of a single color into the slow cooker, and heat on low until the pieces have melted.
3. Pour or ladle the melted crayons into silicone molds and place in a cool, dry location until they’ve cooled completely.
4. Then, break out the coloring sheets and set the little ones to work.
## How to make Candles
1. Simply grate or shred the wax into the slow cooker, and heat on low.
2. Then, prep your molds (old coffee cans or plastic containers work well) by oiling them with a bit of cooking spray.
3. Tie a fresh wick (available at craft stores) onto a pencil, suspend the pencil across the top of the mold, and tape the bottom of the wick in the center of the mold.
4. Once the wax has melted, pour it into the mold, and let it cool.
5. Once the wax has hardened, trim the wick and light it up.
## How to Roast Nuts
1. Grease the bottom of the cooker.
2. Place a cup of raw seeds or nuts into the pot.
3. Sprinkle in seasonings and toss to coat.
4. Cook for 3 to 4 hours on high, or until the seeds “snap” when tested.
5. Store in small plastic bags or glass jars for easy snacking.
## Room Deodorizer
1. Fill the appliance halfway with water
2. Mix in a cup of baking soda.
3. Heat on high, uncovered, for several hours or overnight to get rid of offensive odors.
4. To banish especially strong smells, add a few tablespoons of lemon juice to the water.
## Room Humidifier
1. Fill the pot about three-quarters full with hot water.
2. Cover with the lid.
3. Turn the appliance to its highest setting.
4. After 15 minutes, remove the lid and let the steam saturate your indoor air.
## How to make Hot Cocktail
1. Mix up all of the nonalcoholic components of your cocktail in the slow cooker, and keep it covered.
2. When you’re ready to serve, pour your spirits into a glass, then ladle in some of the heated mixture.
3. Be sure to keep the alcohol out of the appliance or it will cook off before you are ready to imbibe.
## How to make Soap
See recipe and instructions.
## Equipment
• A slow cooker
• A scale (this is important for making a soap that is not too harsh or too oily)
• Glass jars and bowls
• A stick blender
• Plastic cups (optional)
• A metal spoon
• A wooden spoon
• A spatula
• Soap molds (or an old cardboard box lined with parchment paper).
• A large bottle of white vinegar for neutralizing the lye mixture if it spills on anything.
• Gloves
• Eye Goggles
## Soap Recipe Ingredients
• 0.760 pounds water ( 12.16 ounces or 344.73 grams)
• 1 pound (16 ounces or 453.6 grams) coconut oil
• 1 pound (16 ounces or 453.6 grams) olive oil
• 0.303 pounds Lye (4.844 ounces or 137.339 grams)
• Up to 1 ounce of essential oils of choice (optional)
## Soap Recipe Instructions
1. Make sure that your work area is clean, ventilated and that there are no children nearby. This is not a good recipe to let children help with since Lye is caustic until mixed with water and oils.
2. Measure the oils in liquid form (by weight) and pour into the slow cooker.
3. Turn on high just until oils heat up and then reduce to low heat.
4. While oils are heating, carefully measure the lye and water separately. TIP: Use disposable plastic cups. They don’t weigh anything on the scale so they make measuring easy.
5. Keep 3 separate disposable plastic cups labeled:
1. Water
2. Lye
3. Oil
6. Carefully take the cups with the water and the lye outside or to a well ventilated area.
7. Pour the water into a quart size or larger glass jar.
8. With gloves and eye protection, slowly add the lye to the water. (Warning: DO NOT ADD THE WATER TO THE LYE!)
9. Stir carefully with a metal spoon, making sure not to let the liquid come in contact with your body directly.
10. As you stir, this will create a cloudy white mixture that gets really hot.
11. Let this mixture set for about 10 minutes to cool. It should become clear and not cloudy when it has cooled.
12. When the oils in the slow cooker have heated (to about 48-54 degrees Celsius or 120-130 degrees Fahrenheit), slowly pour in the water and lye mixture and stir.
13. Quickly rinse the container used for the water and lye mixture out in the sink. Rinse with white vinegar to make sure all Lye has been neutralized.
14. Use the metal or wooden spoon to stir the lye/water mixture into the oil mixture in the slow Cooker.
15. Once it is evenly mixed, use the stick blender to blend for about 4-5 minutes or until it is opaque and starting to thicken.
16. Cover and keep on low heat to thicken.
17. Set a timer for 15 minutes and check it every 15 minutes until it is ready. It will start to boil and bubble on the sides first.
18. After about 35-55 minutes (depending on your slow cooker) it will thicken enough that the entire surface is bubbly and the sides have collapsed in.
19. At this point, turn the heat off.
20. If you are going to use essential oils (e.g. lavender and orange) for scent, add them now.
21. Quickly and carefully spoon into molds (e.g. empty boxes lined with parchment paper).
22. Cover the molds with parchment paper and set in a cool, dry place.
23. After 24 hours, pop the soap out of the molds. For best results, let it set for a few more days so that it lasts longer.
For other awesome Slow Cooker Tips, see this article.
## Airport Runway Numbers Explained
## Magic Bullet Beauty Tips
### Daily Facial Firming Toner
## Banana Bread Recipe for Magic Bullet
### Main Ingredients
• 4 Over-ripe Bananas
• 1/3 Cup Oil
• 1/3 Cup Sugar
• 2 Eggs
• 2 Tablespoon Milk
### Dry Ingredients
• 1-3/4 Cup Flour
• 1/2 Teaspoon Salt
• 1/2 Teaspoon Cinnamon (optional)
• 2 Teaspoon Baking Powder
• 1/4 Teaspoon Baking Soda
### Instructions
1. Preheat oven to 350F.
2. Grease and flour a loaf pan.
3. Put the bananas, oil, and sugar into the Magic Bullet cup and blend.
4. Add in eggs and milk. Blend for just a couple seconds, until eggs are beaten.
5. Add remaining dry ingredients into the mix.
6. Using the Magic Bullet, pulse to mix. Scraping the sides as necessary until everything is mixed together.
(Note: If you have the Magic Bullet Blender attachment, you can make the entire bread in it; if you don’t, just mix up all the wet ingredients and pour them into a bowl with the dry ingredients.)
7. Pour into prepared loaf pan and bake for 55 minutes, until toothpick comes out clean.
8. Serve warm with butter and enjoy!
## Basic Muffin Recipes with 4 Variations
## Back to the Future in 2015
## Sign Language
## Famous Motivational Quotes to Start Your Day
1. Determine never to be idle. No person will have occasion to complain of the want of time who never loses any. It is wonderful how much may be done if we are always doing.
— Thomas Jefferson
2. Go for it now. The future is promised to no one.
— Wayne Dyer
3. If you are going to achieve excellence in big things, you develop the habit in little matters. Excellence is not an exception, it is a prevailing attitude.
— Charles R. Swindoll
4. If you don’t like something, change it. If you can’t change it, change your attitude. Don’t complain.
— Maya Angelou
5. If you start by promising what you don’t even have yet, you’ll lose your desire to work towards getting it.
— Paulo Coelho
6. If you want to conquer fear, don’t sit home and think about it. Go out and get busy.
— Dale Carnegie
7. It is the mark of an educated mind to be able to entertain a thought without accepting it.
— Aristotle
8. Keep your eyes on the stars, and your feet on the ground.
— Theodore Roosevelt
9. Knowing is not enough; we must apply. Willing is not enough; we must do.
— Johann Wolfgang von Goethe
10. Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.
— Helen Keller
11. Our greatest weakness lies in giving up. The most certain way to succeed is always to try just one more time.
— Thomas A. Edison
12. Out of clutter, find Simplicity. From discord, find Harmony. In the middle of difficulty lies Opportunity.
— Albert Einstein
13. Setting goals is the first step in turning the invisible into the visible.
— Tony Robbins
14. The key is to keep company only with people who uplift you, whose presence calls forth your best.
— Epictetus
15. The quality of a man’s life is in direct proportion to his commitment to excellence, regardless of his chosen field of endeavor.
— Vince Lombardi
16. The will to win, the desire to succeed, the urge to reach your full potential… these are the keys that will unlock the door to personal excellence.
— Confucius
17. We are all inventors, each sailing out on a voyage of discovery, guided each by a private chart, of which there is no duplicate. The world is all gates, all opportunities.
— Ralph Waldo Emerson
18. Well done is better than well said.
— Benjamin Franklin
19. What you get by achieving your goals is not as important as what you become by achieving your goals.
— Henry David Thoreau
20. While intent is the seed of manifestation, action is the water that nourishes the seed. Your actions must reflect your goals in order to achieve true success.
— Steve Maraboli
21. With the new day comes new strength and new thoughts.
— Eleanor Roosevelt
22. You are never too old to set another goal or to dream a new dream.
— C. S. Lewis
23. Your talent determines what you can do. Your motivation determines how much you are willing to do. Your attitude determines how well you do it.
— Lou Holtz
|
https://skerritt.blog/this-simple-trick-will-save-you-hours-of-expanding-binomials/
|
Ever wanted to know how to expand (a+b)¹⁸⁷? Well now you can!
What is a Binomial Coefficient?
First, let's start with a binomial. A binomial is a polynomial with two terms, typically written in the form (a+b).
A binomial coefficient is a coefficient that appears when a binomial is raised to the power of n, as in (a+b)^n.
We all remember from school that (a+b)² = a² + 2ab + b², but what is (a+b)⁸? This is where the binomial formula comes in handy.
Binomial Theorem
The Binomial Theorem is the standard method for finding binomial coefficients, and it is essentially how a computer would compute them. The theorem is as follows:
$(a+b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^{k}$
Luckily for us, the coefficient $\binom{n}{k}$ is the same as another formula we've already seen:
The combinations formula! Let’s try an example.
Example
What is the coefficient of x⁶ in (1+x)⁸?
Simply plug n = 8 and k = 6 into the formula: C(8, 6) = 8! / (6! × 2!) = 28.
Something that may confuse people is: how do we work out what n and k are? Well, we have n objects overall and we want to choose k of them. For binomial / combinatorics sums it helps to think "(combinations of) X taken in sets of Y", where X ≥ Y, in this case "(combinations of) 8 taken in sets of 6".
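A small sketch to confirm the arithmetic (assuming Python 3.8+ for math.comb; not part of the original post):

```python
import math

# Coefficient of x**6 in (1 + x)**8 is "8 choose 6".
print(math.comb(8, 6))  # 28

# The same value via the factorial form n! / (k! * (n - k)!).
n, k = 8, 6
print(math.factorial(n) // (math.factorial(k) * math.factorial(n - k)))  # 28
```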
Pascal’s Triangle
Pascal's triangle is a triangle created by starting off with a 1, starting and ending every line with a 1, and adding the two numbers above to make each new number.
No one could ever explain a maths topic as well as Numberphile, so here’s a Numberphile video on it:
Example
Let’s solve the example from earlier using Pascal’s triangle.
Pascal's triangle always starts counting from 0, so to solve 8C6 (8 choose 6) we simply count 8 rows down, then 6 across. We start counting rows from 0 at the apex, and the second number in each row tells us which row we're on, so the eighth row is the one that starts with 1, 8.
Now we count 6 across (again starting from 0), which is… 28. We just found the binomial coefficient using a super neat and easy-to-draw triangle. Of course, the hardest part is adding together all the numbers, and if the coefficient is large it may be easier to just use the Binomial Theorem, but this method still exists and is useful if you've forgotten the theorem.
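For completeness, a short sketch (my own, not from the article) that builds Pascal's triangle row by row, each entry being the sum of the two entries above it, and reads off row 8, entry 6:

```python
def pascal_row(n):
    """Return row n of Pascal's triangle (row 0 is [1])."""
    row = [1]
    for _ in range(n):
        # Each new row is the pairwise sum of the previous row padded with zeros.
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

row8 = pascal_row(8)
print(row8)     # [1, 8, 28, 56, 70, 56, 28, 8, 1]
print(row8[6])  # 28 -- matches 8 choose 6
```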
|
http://link.springer.com/article/10.1007%2Fs00397-004-0394-3
|
Rheologica Acta
, Volume 44, Issue 2, pp 174–187
# Non Linear Rheology for Long Chain Branching characterization, comparison of two methodologies: Fourier Transform Rheology and Relaxation.
Original paper
DOI: 10.1007/s00397-004-0394-3
Cite this article as:
Fleury, G., Schlatter, G. & Muller, R. Rheol Acta (2004) 44: 174. doi:10.1007/s00397-004-0394-3
## Abstract
In this study we compare three rheological methods for Long Chain Branching (LCB) characterization of a broad variety of linear and branched polyethylene compounds. One method is based on dynamical spectrometry in the linear domain and uses the van Gurp Palmen plot. The two other methods are both based on non-linear rheology (Fourier Transform Rheology (FTR) and chain orientation/relaxation experiments). FTR consists of the Fourier analysis of the shear stress signal due to large oscillatory shear strains. In the present work we focus on the third and the fifth harmonics of the shear stress response. The chain orientation/relaxation experiment consists of the analysis of the polymer relaxation after a large step strain obtained by squeeze flow. In this method, relaxation is measured by dynamical spectrometry and is characterized by two relaxation times related to LCB. All methods distinguish clearly the group of linear polyethylenes from the group of branched polyethylenes. However, FTR and chain orientation/relaxation experiments show a better sensitivity than the van Gurp Palmen plot. Non-linear experiments seem suitable for distinguishing long-branched polyethylenes from one another.
### Keywords
Long Chain Branching LCB Fourier Transform Rheology FTR Chain orientation
## Copyright information
© Springer-Verlag 2004
## Authors and Affiliations
1. 1.LIPHT (Laboratoire d’Ingénierie des Polymères pour les Hautes Technologies)ECPM (Ecole Européenne de Chimie Polymères et Matériaux de Strasbourg)StrasbourgFrance
|
https://www.gradesaver.com/a-christmas-carol/q-and-a/the-diction-for-the-ghosts-299615
|
# The diction for the ghosts
Why did Dickens refer to the ghosts as messenger, ghost, spirit, and phantom? Is there any specific reason behind this choice of diction? Thanks
|
http://www.sadafsculinaryadventures.com/2014/03/maple-syrup-festival-at-bronte-creek.html
|
## Sunday, 23 March 2014
### Maple Syrup Festival at Bronte Creek Park
Did you know that it takes about 40 litres of sap to produce one litre of maple syrup? Or the fact that North America is the only place that has both the sugar maple tree and the proper weather required to produce maple syrup? These were just some of the facts that I learnt during the Maple Syrup Festival at Bronte Creek Park - a much loved Canadian springtime tradition.
We braved the windchill, snow and slush (yes, we call this spring in Canada) as tour guides dressed up in Late Victorian Era (1890s) costumes demonstrated how maple trees are tapped to make maple syrup and maple sugar.
Sap Collection
Maple sugar being made in a Victorian style kitchen
We also learnt that all maple syrups are not the same.
The Gift Shop and the Candy Shanty had lots of maple goodies like taffy, maple fudge, maple cream cookies, maple mustard, maple jelly and of course, maple syrup.
Maple Candies
Maple Syrup
Maple Lollies are a must-have for Sofia and Aisha at every Maple Syrup Festival.
Aisha enjoying Maple Lolly
Another enjoyable part of the park was the Victorian Farmhouse where we got a glimpse of life in the 1890s and learnt interesting tidbits.
Victorian Farmhouse
All the walking and fresh air made us hungry and we decided to take the wagon ride to the Pancake House to have some warm pancakes with maple syrup.
Pancakes with maple syrup
After more than three hours the girls had still not had enough and asked us to take them to the children's barn at the other end of the park's day area where they spent another half an hour. Overall, it was a great experience for the entire family, worth going year after year.
1. Definitely fun going every year, lovely pictures :)
1. Thanks Akheela... coming from you that's indeed a compliment.
Thanks for stopping by my blog. Your feedback, suggestions and queries are always welcome here.
|
https://cran.ma.ic.ac.uk/web/packages/RWsearch/vignettes/RWsearch-2-Display-Download-Documentation.html
|
## Introduction
RWsearch stands for « Search in R packages, task views, CRAN and in the Web ».
This vignette introduces the following features cited in the README file:
5. Display the results as a list or as a table in the console or in the browser and save them as txt, md, html, tex or pdf files.
6. In one instruction, download in one directory the whole documentation and the tar.gz files related to one or several packages. This is the perfect tool to read the documentation off-line and study the source code of a package.
## (Down)Load crandb and extract packages
In this vignette, crandb must be loaded in .GlobalEnv. This can be done either by downloading a fresh version of crandb or by loading a file. Read Vignette 1 and the details about crandb_down(). Here, we use the small file of 110 packages saved in RWsearch/data.
crandb_down()
or
crandb_load(system.file("data", "zcrandb.rda", package = "RWsearch"))
# $newfile
# crandb loaded. 110 packages listed between 2011-04-13 and 2021-06-01

vec <- s_crandb(find, select = "P") ; vec
# [1] "findR" "packagefinder" "wfindr"

lst <- s_crandb_list(thermodynamic, "chemical reaction", select = "PT") ; lst
# $thermodynamic
# [1] "aiRthermo" "CHNOSZ"
#
# $`chemical reaction`
# [1] "bioPN" "RxnSim" "sbioPN"
ls()
# [1] "crandb" "lst" "vec"
## Explore the selected packages
RWsearch can print the information related to the selected packages in the console, in the pager, in txt, md, tex and pdf files and in html pages.
The source of information can be R itself, crandb or your local CRAN.
The information provided by R or your local CRAN is usually in html or pdf format. The information extracted from crandb can be presented in a table or in a classical text with sections and sub-sections.
| Source | Format | Output | Functions |
|--------|--------|--------|-----------|
| CRAN   | html   | browser | p_page(), p_archive(), p_check(), e_check() |
| R      | html   | browser | p_html(), p_html2(), p_vig(), p_vig_all() |
| CRAN   | pdf    | pdf in browser | p_pdfweb() |
| R      | pdf    | pdf viewer | p_pdf() |
| crandb | table  | console | p_table(), p_table2(), p_table5(), p_table7() |
| crandb | table  | browser | p_display(), p_display5(), p_display7() |
| crandb | table  | pdf file | table2pdf(), p_table2pdf(), p_table3pdf(), p_table5pdf(), p_table7pdf() |
| crandb | text   | txt file, pager | p_text() |
| crandb | text   | md file, pager | p_text2md() |
| crandb | text   | tex + pdf files | p_text2pdf() |
## HTML and PDF formats
A simple but useful feature is to launch the html pages directly from R.
Local pdf pages are opened in the pdf viewer.
Remote pdf pages are opened in the pdf application provided by the browser.
p_page() opens the yourCRAN/packages/pkg/index.html pages
p_archive() opens the https://cran.r-project.org/src/contrib/Archive/pkg pages.
p_check() opens the yourCRAN/checks/check.results.pkg.html pages.
e_check() opens the yourCRAN/checks/check.results.emailadresse.html pages (the check page of each maintainer identified by the maintainer email addresses)
p_html() opens the local help pages of each packages. The urls start by http://127.0.0.1:.
p_html2() opens the local help pages of each packages. The urls start by file:///C:/ (on Windows).
p_vig() opens one html page that lists the vignettes of the selected packages. The url starts by http://127.0.0.1:.
p_vig_all() opens one html page that lists the vignettes of all installed packages. This can be a huge list. The url starts by http://127.0.0.1:.
p_pdf() opens the manual(s) of the selected packages in the pdf viewer. If the manuals do not exist, they are created on the fly by Texlive or Miktex.
p_pdfweb() opens the pdf file yourCRAN/packages/pkg/pkg.pdf in the pdf application provided by the browser.
## Table format
The generic function is p_table() which has an argument columns to select any (combination of) column(s) in crandb. The default value prints 3 columns (Package name + Title + Description). Other predefined functions print 2 columns (Package name + Title), 5 columns (3 columns + Author + Maintainer), 7 columns (5 columns + Version + Published).
In the console, the width is limited and the most interesting function is p_table2(). It displays the Package name and package Title.
p_display7(), p_table7pdf() and their variants rely on the automatic scaling tools of html and pdf files to display more columns in a readable manner.
### p_table2() prints in the console
p_table2(vec)
# Package Title
# 21 findR Find Code Snippets, R Scripts, R Markdown, PDF and Text Files with Pattern Matching
# 28 packagefinder Comfortable Search for R Packages on CRAN
# 49 wfindr Crossword, Scrabble and Anagram Solver
### p_display7() opens the browser
p_display7(vec)
### p_table7pdf() prints in a pdf file (table style)
p_table7pdf(vec)
## Text format
More information can be printed with texts in classical format than in tables as the page width is usually not a constraint.
RWsearch has 3 functions: p_text(), p_text2md(), p_text2pdf() to produce files in classical UTF-8 text, UTF-8 markdown and pdf format. The level of information extracted from crandb is controlled by the arguments beforetext, f_maintext, aftertext. Any column of crandb can be selected as well as the links to the main files in CRAN. An internet connection is required as many queries are sent to CRAN to find the NEWS and README urls.
### p_text() prints in a txt file
p_text(lst, editor = TRUE)
[1] "pkgstext_thermodynamic.txt" "pkgstext_chemicalreaction.txt"
The following text appears in the “pkgstext_thermodynamic.txt” file:
# == aiRthermo ==
# aiRthermo: Atmospheric Thermodynamics and Visualization
# Deals with many computations related to the thermodynamics of atmospheric processes. It includes many functions designed to consider the density of air with varying degrees of water vapour in it, saturation pressures and mixing ratios, conversion of moisture indices, computation of atmospheric states of parcels subject to dry or pseudoadiabatic vertical evolutions and atmospheric instability indices that are routinely used for operational weather forecasts or meteorological diagnostics.
# Depends: NA
# Imports: NA
# Suggests: NA
# Version: 1.2.1
# Published: 2018-09-16
# Maintainer: Santos J. González-Rojí <[email protected]>
# https://cran.univ-paris1.fr/web/packages/aiRthermo/index.html
# https://cran.univ-paris1.fr/web/packages/aiRthermo/aiRthermo.pdf
# https://cran.univ-paris1.fr/web/packages/aiRthermo/news/news.html
# == CHNOSZ ==
# CHNOSZ: Thermodynamic Calculations and Diagrams for Geochemistry
# An integrated set of tools for thermodynamic calculations in aqueous geochemistry and geobiochemistry. Functions are provided for writing balanced reactions to form species from user-selected basis species and for calculating the standard molal properties of species and reactions, including the standard Gibbs energy and equilibrium constant. Calculations of the non-equilibrium chemical affinity and equilibrium chemical activity of species can be portrayed on diagrams as a function of temperature, pressure, or activity of basis species; in two dimensions, this gives a maximum affinity or predominance diagram. The diagrams have formatted chemical formulas and axis labels, and water stability limits can be added to Eh-pH, oxygen fugacity- temperature, and other diagrams with a redox variable. The package has been developed to handle common calculations in aqueous geochemistry, such as solubility due to complexation of metal ions, mineral buffers of redox or pH, and changing the basis species across a diagram ("mosaic diagrams"). CHNOSZ also has unique capabilities for comparing the compositional and thermodynamic properties of different proteins.
# Depends: R (>= 3.1.0)
# Imports: grDevices, graphics, stats, utils
# Suggests: limSolve, testthat, knitr, rmarkdown, tufte
# Version: 1.2.0
# Published: 2019-02-10
# Maintainer: Jeffrey Dick <[email protected]>
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/index.html
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/CHNOSZ.pdf
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/NEWS
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/vignettes/anintro.html
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/vignettes/eos-regress.html
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/vignettes/obigt.html
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/vignettes/equilibrium.pdf
# https://cran.univ-paris1.fr/web/packages/CHNOSZ/vignettes/hotspring.pdf
### p_text2md() prints in a txt file with md extension
p_text2md(lst, editor = TRUE)
[1] "pkgstext_thermodynamic.md" "pkgstext_chemicalreaction.md"
The first part of the “pkgstext_thermodynamic.md” file is:
# ---
# title: TITLE
# author: AUTHOR
# date: 2019-02-24
# output:
# pdf_document:
# keep_tex: false
# toc: false
# number_sections: true
# fontsize: 10pt
# papersize: a4paper
# geometry: margin=1in
# ---
#
#
# # aiRthermo
# aiRthermo: Atmospheric Thermodynamics and Visualization
# Deals with many computations related to the thermodynamics of atmospheric processes. It includes many functions designed to consider the density of air with varying degrees of water vapour in it, saturation pressures and mixing ratios, conversion of moisture indices, computation of atmospheric states of parcels subject to dry or pseudoadiabatic vertical evolutions and atmospheric instability indices that are routinely used for operational weather forecasts or meteorological diagnostics.
# Depends: NA
# Imports: NA
# Suggests: NA
# Version: 1.2.1
# Published: 2018-09-16
# Maintainer: Santos J. González-Rojí <[email protected]>
# https://cran.univ-paris1.fr/web/packages/aiRthermo/index.html
# https://cran.univ-paris1.fr/web/packages/aiRthermo/aiRthermo.pdf
# https://cran.univ-paris1.fr/web/packages/aiRthermo/news/news.html
### p_text2pdf() prints in a pdf file (article style)
p_text2pdf(lst)
[1] "pkgstext_thermodynamic.tex" "pkgstext_chemicalreaction.tex"
By default, the pdf files pkgstext_thermodynamic.pdf and pkgstext_chemicalreaction.pdf are automatically generated in the current directory from the tex files. (The vignette shows a screenshot of the beginning of the "pkgstext_thermodynamic.pdf" file, not reproduced here.)
## Download the documentation with p_down()
p_down() is a smart function designed for people who need to work offline. It downloads all R package documentation with just one line of code. The numbers speak for themselves: 35 files were downloaded in 11 seconds (on one SSD disk and with a standard ADSL line). Package vectors are downloaded in the current directory. Package lists are downloaded in sub-directories.
p_down(vec, NEWS = TRUE, ChangeLog = TRUE, targz = TRUE)
p_down(lst, NEWS = TRUE, ChangeLog = TRUE, targz = TRUE)
p_down0() has been recently added to download one or two documents or download the tar.gz package and decompress it on the fly.
|
https://docs.uabgrid.uab.edu/w/index.php?title=MATLAB&oldid=4685&diff=prev
|
# MATLAB
(Difference between revisions)
Matlab license renewed for 2016-2017. To update your MATLAB license, click Help > Licensing > Update Current License. To use MATLAB with the SLURM scheduler on Cheaha, please click the link below.
NOTE: Attention Mac OS X 10.7 users
Java Update 17 does not allow MATLAB (2012a, 2012b and 2013a) to be installed on OS X.
Please do not update Java if you wish to install new versions of MATLAB.
MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by Mathworks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, and Fortran. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems.
MATLAB can be used on personal computers and powerful server systems, including the Cheaha compute cluster. With the addition of the Parallel Computing Toolbox, the language can be extended with parallel implementations for common computational functions, including for-loop unrolling. Additionally this toolbox supports offloading computationally intensive workloads to Cheaha the campus compute cluster.
In January 2011, UAB acquired a site license for MATLAB that allows faculty, staff, post-docs, and graduate students to use MATLAB, Simulink, and 42 toolboxes (including the parallel toolbox) for research activities on campus and personal systems. Additionally, from January 2012 MATLAB is available to students on campus and personal computer systems.
## MATLAB Versions
Mathworks has two annual releases of Matlab: the "a" release in the spring and the "b" release in the fall. Each release gets tagged with the current year and "a" or "b". For example, "Matlab 2013a" is the spring release for 2013.
If you are using Matlab in an isolated environment like on your laptop or desktop, you can generally install the most recent release available from Mathworks.
If you plan to use specific features of Matlab, however, like running computations on the Cheaha cluster or using a network install, you should install our recommended release of Matlab that we know works with our services.
The current recommended release is Matlab 2013a.
In UAB IT Research Computing, we update our services to work with the latest Matlab release a month or so after the general release of that product. This gives us time to try out the latest release, get feedback from other early adopters, and update services like the Distributed Computing Toolbox, license server and our documentation.
Note: you can always install whichever Matlab release you need that is still available from Mathworks. Different versions of Matlab are always installed side-by-side. Depending on your science domain, you may need to select certain releases in order to access specific features. However, not all of these releases may be supported by our compute cluster or network license manager.
## MATLAB on the Desktop
Using Mathworks software available under the UAB campus license on your computer involves download and install steps common to all software packages and an authorization step that grants you the rights to use the software under the campus agreement.
NOTE: These steps are common to all installation and activation scenarios and are detailed in Downloading and Installing MATLAB.
1. Create an account at the Mathworks site using your campus @uab.edu email address. Please do not share your mathworks account username or password with anyone as this account will be associated with the UAB TAH license.
2. Request an activation key from the UAB software library page. Please make sure to request the appropriate key (Faculty/staff or student) as the software are on different licenses.
3. Associate your Mathworks account with the campus-wide MATLAB license using your activation key.
4. Download and install the software as described in Downloading and Installing MATLAB.
5. Activate the software using the activation scenario that best suits your particular needs.
### Updating MATLAB on Desktop
If you have been running MATLAB on your desktop during 2011, you can click 'Help', then 'Licensing', and finally 'Update Current Licenses'. This will remedy the license expiration message without having to update to a new copy of MATLAB.
### Installation Help
MATLAB is a self-supported application at UAB. A UAB MATLAB users peer support forum is available. Subscription options are described below in MATLAB Support.
## MATLAB on Cheaha (compute cluster)
MATLAB is pre-installed on the Cheaha research computing system. This allows users to run MATLAB directly on the cluster without any need to install software. MATLAB jobs can also be submitted to Cheaha directly from your desktop, however, this requires additional configuration described in MatLab DCS.
### Integration with Desktop MATLAB
Accessing the additional compute power of Cheaha from your desktop MATLAB install is recommended for most users because it combines the familiar MATLAB user experience with cluster computing power. However, additional steps are required to configure a desktop MATLAB installation to access worker nodes on the Cheaha cluster via the Distributed Computing Server (DCS) platform. Please see MatLab DCS for configuration information.
### Using Batch Submit from the Desktop Instead of matlabpool Jobs
It is not possible to use matlabpool jobs on the cluster from your desktop due to firewall restrictions. Instead, desktop MATLAB users should use the batch submit options described in the MatLab DCS configuration to submit their jobs to the cluster. Matlabpool jobs are possible when running MATLAB directly on the cluster as described in matlabpool from the head node.
### Direct Use on the Cluster
Using MATLAB directly on the cluster is recommended only for people comfortable accessing systems via a command line environment (e.g. secure shell SSH). SSH access to Cheaha supports X Windows and VNC sessions for displaying a full graphical MATLAB development environment on client desktops with an X Windows server or VNC client application installed. For more information please see MatLab CLI. Matlabpool jobs are possible when running MATLAB directly in this environment as described in matlabpool from the head node.
## Advanced Install Scenarios
This information is helpful for people interested in the many ways in which MATLAB can be installed. A normal end-user installing MATLAB for themselves on a desktop or laptop computer should follow the #MATLAB on the Desktop instructions above. The following information is of most interest to IT or computer lab administrators who maintain MATLAB installs for many people on many computers.
### User Installation and Activation Scenarios
1. Installation and activation with Designated Computer License - This option is recommended for mobile computing systems which may or may not be on the UAB network when MATLAB is being used. This install type authorizes an individual computer to run MATLAB, allowing MATLAB to run regardless of where the computer is located. (This is the only option available if you want to use your MATLAB on your computer when you are not physically present at UAB)
2. Installation and Activation with Network License - This is the recommended install when MATLAB will be used on computers that remain connected to the campus network. This installation requires MatLab software to be installed on your computer and provides a simple 2-line file to activate the software. This option is highly recommended for UAB desktops.
NOTE: Most on-campus users are encouraged to use the Installation and Activation with Network License option for activation unless there are special circumstances that require the alternative activation scenarios.
### Network Concurrent/Lab admin Installation and Activation Scenarios
1. Matlab Network Concurrent User Install - This installation is only recommended for system administrators who already manage a lab or departmental installation of MATLAB and who would like to continue to provide this service for their user community. This install type may also be practical if there are special additional license needs that will apply to multiple computers running MATLAB. Note, all MATLAB toolboxes actively used at UAB are currently covered under the UAB campus license.
## On-line Tutorials and Learning Resources
• Getting Started
• Recorded Webinars, select a topic and complete the request form.
• Interactive Tutorials for Students and Faculty
• Example Code, News, Blogs, Teaching Materials
## MATLAB Support / Mailing List
As with any application or computer language, learning to use MATLAB to analyze data or to develop or modify MATLAB applications is an individual responsibility. There is ample application documentation available from the Mathworks website, potential outreach to colleagues who also use MATLAB, and options for consultation with Mathworks. Mathworks also host on-campus training seminars several times a year and provides many on-line learning tutorials.
Installation support for MATLAB at UAB is provided by your local IT support organization and the Docs wiki.
### Mathworks Website
Your first and best option for application-specific questions on MATLAB is to refer to the on-line MATLAB documentation. The Mathworks site also provides a support matrix and an on-line knowledge base.
### UAB MATLAB Wiki
The MATLAB page on the Docs wiki is the starting point for installing MATLAB at UAB and, optionally, configuring it to use cluster computing. All users are encouraged to contribute to the MATLAB knowledge in this wiki, especially if you see areas where improvements are needed. Remember, this knowledge base is only as good as the people who contribute to it.
Contributing to the wiki is as easy as clicking the login link on the top-right of the page and signing in with your UAB BlazerID. If you are unsure about making an edit, you can make suggestions for improvement on the page's Discussion tab or discuss the proposed improvement in the MATLAB user group.
### UAB MATLAB User Group
At UAB, MATLAB installation support is provided by your local IT support group. Support for application-specific questions is available from peers in your research group. We realize that some people are not as familiar with MATLAB as others. For this reason, we have established a MATLAB user forum (mailing list) where users of MATLAB at UAB can help answer each other's questions.
This is a network of volunteers sharing their knowledge with peers. You are encouraged to reach out to this community with questions on using MATLAB.
Archives of MATLAB user group discussions are available on-line at https://vo.uabgrid.uab.edu/sympa/arc/matlab-user. You may find your question is already answered in these archives.
### UAB MATLAB announce mailing list
To receive information about UAB's MATLAB license and announcements, please subscribe to the matlab-annc mailing list.
## UAB Mathworks Site License
UAB has acquired a university wide site license for MATLAB and Simulink software. This license includes all Mathworks Inc. products in use at UAB, with the exception of the Distributed Computing Server (DCS) which must be licensed separately. This new site license also makes available several new toolboxes and blocksets not previously licensed by UAB.
This site license is known as the Mathworks Inc. Total Academic Headcount (TAH) license or Mathworks TAH. As of January 1, 2012, UAB has two TAH licenses. First, the TAH campus license, number 678600, is the same TAH license which was in operation during 2011 and is for use by all UAB full-time faculty and staff. Second, the TAH student license, number 731720, is for use by UAB students; graduate and professional students at UAB with funding or working on UAB research projects should use the TAH campus license.
The Mathworks TAH license -- either campus or student -- makes it easier for everyone in the UAB community to use MATLAB, MATLAB Toolboxes (extensions) and Simulink software. Specifically, it authorizes use of MATLAB on university-owned machines for all faculty, staff and students. Faculty and staff are also entitled to install the software (TAH campus - 678600) on personally owned computers. Students authorized under TAH student (TAH student - 731730) may install this software on their personal computers. It is important that each authorized user of either TAH license use the authentication key corresponding to their authorized TAH license. That is, authorized users of TAH campus (678600) should use the authentication key obtained from http://www.uab.edu/it/software/, after selecting Mathworks/MATLAB, corresponding to the Faculty/staff group. Similarly, authorized users of TAH student (731730) should use the authentication key for the Students group. For questions on which authentication key to use, or for help installing MATLAB software on your computer, please contact [email protected] or post a question to the list serve [email protected].
The TAH allows unlimited use of MATLAB, Simulink and the 48 MATLAB toolboxes, blocksets and compiler in both research and teaching activities. Faculty, staff and students can install the software on computers located off-campus; however, students may only use Mathworks software on UAB-owned computers located on-campus.
UAB was the first university in Alabama to implement a Mathworks TAH license.
## Parallel Computing Extensions
MATLAB language extensions to support parallel processing are available via the Parallel Computing Toolbox. This is one of the 42 toolboxes available under the UAB TAH license. The Parallel Computing Toolbox enables MATLAB to make use of the multi-core processors found on many computers in order to speed the execution of code sections that can execute in parallel. This toolbox supports the use of up to 8 cores on a single computer system through the use of worker threads that spread the execution of code across multiple cores.
Additional parallelism can be supported by adding more worker threads via a secondary software platform known as the Distributed Computing Server (DCS). The DCS runs on a compute cluster and can provide many more worker threads to increase parallelism. UAB IT Research Computing has licensed a 128 worker DCS installation for the Cheaha compute cluster. The Parallel Computing Toolbox can be configured to access this license from desktop MATLAB installations. Please see MatLab DCS for configuration details.
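As a minimal illustration of what the Parallel Computing Toolbox provides (this sketch is not part of the UAB documentation, and the work inside the loop is a placeholder), a `parfor` loop spreads independent iterations across a local pool of workers:

```matlab
% Minimal sketch: distribute independent iterations over local workers.
% Assumes the Parallel Computing Toolbox is installed and licensed.
pool = parpool('local');            % start workers on the local machine
n = 1000;
results = zeros(1, n);
parfor i = 1:n
    results(i) = sum(rand(1, 1e5)); % placeholder for real per-iteration work
end
delete(pool);                       % release the workers
```

Running the same loop on the cluster-hosted DCS workers is then a configuration matter; see the MatLab DCS page referenced above.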
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1760755479335785, "perplexity": 4500.997332415598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056297.61/warc/CC-MAIN-20210918032926-20210918062926-00274.warc.gz"}
|
http://mathhelpforum.com/calculus/280935-limits.html
|
1. ## Limits
I'm learning about limits and I have realised that if you graph the function whose limit you are taking, it seems to never exist where x = a, i.e. where x is the number you're trying to calculate the limit at. However, the same tutorial mentions that "many of the functions don't exist at x = a". This seems wrong to me; to me it seems the functions never exist at x = a. Am I correct?
2. ## Re: Limits
No. $$\lim_{x \to 4} (x-2) = 2$$
Indeed, the definition of continuity of a function at a given point is that the value of the function at that point equals the limit of the function there.
3. ## Re: Limits
It depends on the function whose limit you are analyzing and whether that function is actually defined at x = a. For example:
Suppose you are given $\displaystyle f(x)=x^2$
$\displaystyle \lim_{x\to1}\left(f(x)\right)=1$
This comes directly from the fact that $\displaystyle f(1)=1$. Now consider:
$\displaystyle g(x)=\frac{x^2-2x+1}{x-1}$
Here, we would find:
$\displaystyle \lim_{x\to1}\left(g(x)\right)=0$
Even though $\displaystyle g(1)$ is undefined.
4. ## Re: Limits
The examples you are given have that property because they are more interesting. But continuous functions which, by definition, have the property that $\displaystyle \lim_{x\to a} f(x)= f(a)$ are the most useful functions.
5. ## Re: Limits
I think where my confusion arose from was the fact that I was introduced to limits through derivatives. So I thought all limits were of the derivative form. Looks like the derivative is just a special case of limits.
Yes.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888389706611633, "perplexity": 409.2301350613483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583745010.63/warc/CC-MAIN-20190121005305-20190121031305-00501.warc.gz"}
|
https://www.ideals.illinois.edu/handle/2142/17362/browse?type=title
|
# Browse Dissertations and Theses - Statistics by Title
• (1989)
This work deals with a decision-theoretic evaluation of p-value rules. A test statistic is judged on the behavior of its p-value with the loss function being an increasing function G of the p-value.
application/pdf
PDF (1MB)
• (1959)
application/pdf
PDF (1MB)
• (1996)
The identifiability and estimability of the parameters for the Unified Cognitive/IRT Model are studied. A calibration procedure for the Unified Model is then proposed. This procedure uses the marginal maximum likelihood ...
application/pdf
PDF (4MB)
• (2010-08-20)
The statistical inference based on the ordinary least squares regression is sub-optimal when the distributions are skewed or when the quantity of interest is the upper or lower tail of the distributions. For example, the ...
application/pdf
PDF (1MB)
• (2000)
Using results from He & Shao (2000), a proof of the consistency and asymptotic normality of item parameter estimates obtained from the Marginal Maximum Likelihood Estimation (Bock & Lieberman, 1970) procedure as both the ...
application/pdf
PDF (5MB)
• (1989)
In many areas of application of statistics one has a relevant parametric family of densities and wishes to estimate the density from a random sample. In such cases one can use the family to generate an estimator. We fix a ...
application/pdf
PDF (4MB)
• (1967)
application/pdf
PDF (1MB)
• (1989)
Many authors, for example, Fisher (1950), Pearson (1938), Birnbaum (1954), Good (1955), Littell and Folks (1971, 1973), Berk and Cohen (1979), and Koziol, Perlman, and Rasmussen (1988), have studied the problem of combining ...
application/pdf
PDF (7MB)
• (2012-02-01)
Bayesian inference provides a flexible way of combining data with prior information. However, quantile regression is not equipped with a parametric likelihood, and therefore, Bayesian inference for quantile regression ...
application/pdf
PDF (446kB)
• (2002)
This thesis presents a progression from theory development to real-data application. Chapter 1 gives a literature review of other psychometric models for formative assessment, or cognitive diagnosis models, as an introduction ...
application/pdf
PDF (9MB)
• (1993)
We consider the problem of regressing a dichotomous response variable on a predictor variable. Our interest is in modelling the probability of occurrence of the response as a function of the predictor variable, and in ...
application/pdf
PDF (6MB)
• (2011-05-25)
The latent class model (LCM) is a statistical method that introduces a set of latent categorical variables. The main advantage of LCM is that conditional on latent variables, the manifest variables are mutually independent ...
application/pdf
PDF (5MB)
• (2011-05-25)
Quantile regression, as a supplement to the mean regression, is often used when a comprehensive relationship between the response variable and the explanatory variables is desired. The traditional frequentists’ approach ...
application/pdf
PDF (374kB)
• (2007)
Clustering and classification have been important tools to address a broad range of problems in fields such as image analysis, genomics, and many other areas. Basically, these clustering problems can be simplified as two ...
application/pdf
PDF (2MB)
• (2000)
To effectively build a regression model with a large number of covariates is no easy task. We consider using dimension reduction before building a parametric or spline model. The dimension reduction procedure is based on ...
application/pdf
PDF (4MB)
• (2000)
Motivated by consulting in infrastructure studies, we consider the estimation and inference for regression models where the response variable is bounded or censored. In these conditions, least squares methods are not ...
application/pdf
PDF (3MB)
• (2006)
The classical approaches to clustering are hierarchical and k-means. They are popular in practice. However, they can not address the issue of determining the number of clusters within the data. In this dissertation, we ...
application/pdf
PDF (2MB)
• (1991)
Consider the model $y_{lj} = \mu_l(t_j) + \varepsilon_{lj}$, $l = 1,\dots,m$ and $j = 1,\dots,n$, where $\varepsilon_{lj}$ are independent mean zero finite variance random variables. Under the above setting we ...
application/pdf
PDF (5MB)
• (1990)
Two-stage Bayes procedures, also known as Bayes double sample procedures, for estimating the mean of exponential family distributions are given by Cohen and Sackrowitz (1984). In their study, they develop double sample ...
application/pdf
PDF (2MB)
• (2004)
The flexible forms of nonparametric IRT models make test equating more challenging. Though linear equating under parametric IRT models is obvious and appropriate, it might not be appropriate for nonparametric models. Two ...
application/pdf
PDF (3MB)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.81325763463974, "perplexity": 1675.815670708438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824225.41/warc/CC-MAIN-20171020135519-20171020155519-00086.warc.gz"}
|
http://webmasters.stackexchange.com/questions/53249/does-updating-site-too-frequently-hurt-seo-rankings/53250
|
# Does updating site too frequently hurt SEO rankings?
I recently updated my site, like 3-4 times in 2 months. At first everything was good, (i.e., Google was updating my pages' SERP content in 2-3 days). But now I did kind of a major change, I switched to CodeIgniter. My URLs are nearly the same as before, but my page titles and meta descriptions changed in terms of order. They were starting with brand name, and now they are starting with keywords. Now for one week Google has not updated my site.
Do you think my site is Sandboxed?
Is it bad to update too frequently?
Also, my site is hosted on a free host called 000Webhost, so it is sometimes down. Could that affect ranking very much? My site also has no backlinks whatsoever, so I don't think Google visits my site very often, and Google Webmaster Tools says so too.
-
"URLs are nearly the same" - nearly? So, they are different? Have you redirected old to new? "page titles and meta descriptions changed in terms of order" - just the order in which they appear in the source of the page? – w3d Sep 19 '13 at 23:53
You are right, sorry I forgot to add that. The only change was when listing the projects, it was like ./gallery/project-name , then it is ./projects/project-name. Of course I 301-redirected. But the real problem is, even the homepage has not been updated in Google, for a week. – halilpazarlama Sep 20 '13 at 1:24
Just an FYI… I used 000WebHost in the past, and one day without warning they deleted my site completely - everything from database to files to DNS settings completely deleted. Make sure you have a good backup and consider moving to a better (paid) webhost – bungeshea Sep 25 '13 at 22:53
@bungeshea you are right, of course I'm prepared for that. I want to make my home computer a server, but I cannot figure out the port-forwarding stuff with my router. As soon as I do that, I'm off 000webhost :) – halilpazarlama Sep 26 '13 at 9:36
Also, my site is hosted on a free host called 000Webhost, so it is sometimes down. Could that affect ranking very much? ...I don't think Google visits my site very often, and Google Webmaster Tools says so too.
This is a pretty significant issue, which you indicated before here as well. If Google can't reach your site, it's not going to be able to index changes to it and might slow down crawling it. You really should consider moving your site to a more reliable web host.
Is it bad to update too frequently?
Generally, updating content frequently will result in your website getting crawled more frequently, so it's definitely not a bad thing to do.
My URLs are nearly the same as before...
If your URLs change, you'll need to 301 redirect them to ones with matching content, and get the new URLs indexed by submitting them in a sitemap.
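For example, on an Apache host with mod_alias enabled (a hypothetical sketch based on the gallery-to-projects rename mentioned in the comments, not a description of the asker's actual setup), a single rule in `.htaccess` can issue the 301s:

```apache
# Permanently redirect old gallery URLs to the new projects URLs
RedirectMatch 301 ^/gallery/(.*)$ /projects/$1
```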
My page titles and meta descriptions changed in terms of order. They were starting with brand name, and now they are starting with keywords.
That's not necessarily a bad thing either, providing that you're not repeating keywords (i.e., keyword stuffing).
My site also has no backlinks whatsoever
Obtaining some backlinks from authoritative relevant sites would certainly help, but the site needs to be reachable on a consistent basis too.
Do you think my site is Sandboxed?
It's likely not experiencing a Sandbox effect based on what you've described - it probably just hasn't been crawlable, relearned, and updated. I would suggest first fixing your web hosting, doing appropriate 301 redirection, resubmitting new URLs in your sitemap, and finally working on getting some quality backlinks.
-
Thanks for your answer, dan! I did the necessary 301's, and updated the new sitemap. For the keyword stuffing part, I'm not sure but I think they are ok. I mean they are not the form "keyword1, keyword2, keyword3" , but in a meaningful way, like "you can see our keyword1 and keyword2 in this keyword3 project" etc. for the backlinks part, I have no idea how to get them. A lot of site-directory sites already give me backlinks, but I don't think they count. Should I go to forums and write backlinks myself? Or social media ? – halilpazarlama Sep 20 '13 at 1:46
No problem, hope it helps get your site back on track. The keywords are probably fine. I would find sites that are relevant to yours, and ranking fairly well, and try to get them to mention and link to yours and vice versa. Social media might provide more signals and help bring people to your site too. Have a look at some of the Questions here under Frequent, like this one. Good luck! – dan Sep 20 '13 at 1:55
Updating very frequently is generally a good thing, and your site gets crawled more often as a result, but the updates should be to content, not to URLs and titles. If a title is not working for your site you can change it, but not too frequently.
-
I guess so. Did anybody have an experience with changing the title? Does it hurt? Most importantly, in long-run? What I did is actually changing the title from "company name - city name" to "city name - industry name - company name" . – halilpazarlama Sep 20 '13 at 10:02
Changing a title or description is appropriate when a particular page is not ranking for its target keyword; in that case changing the title won't hurt you and can produce some good results. But when you change the title of a page that is already ranking and cached, that page can be pushed down. So change the titles of pages which are not ranking. – Justice Brand Sep 27 '13 at 7:20
I changed the homepage's title a lot, but now I'm steady and I think it worked out for me. Thanks for all the help ! – halilpazarlama Sep 27 '13 at 14:31
No, updating your website too frequently will not hurt your SEO.
A keyword in the title is not by itself all that meaningful, although the title is the second most important on-page SEO element. I suggest you read the guideline below for creating unique titles.
http://moz.com/learn/seo/title-tag
According to the Webmaster Guidelines, crawling and indexing of a page depends in part on the If-Modified-Since HTTP header. This feature allows your web server to tell Google whether your content has changed since it last crawled your site. Supporting this feature saves you bandwidth and overhead.
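As an illustration (hypothetical host and date), a crawler sends the header on a conditional request, and an unchanged page can be answered with a bodyless 304:

```
GET /index.html HTTP/1.1
Host: example.com
If-Modified-Since: Fri, 20 Sep 2013 10:00:00 GMT

HTTP/1.1 304 Not Modified
```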
Is it bad to update too frequently?
No. There are lots of websites on the internet that are updated every few seconds, like amazon.com.
Updating a website can be a good thing for you if you follow the basic rules of the Webmaster guidelines. Google likes new and fresh updates to a website.
If your website is down when Google tries to crawl it, Google reschedules the crawl of the website after 24 hours. If the downtime is longer than 24 hours, or is repeated often, it will definitely hurt your website's SERP position.
-
Updating pages can be a problem or a solution.
Small changes are usually not a problem, but think about this: if you change a page that displays content about "football" and now displays content about "monkeys", as an example, of course rankings are gonna drop, you're completely changing the subject of a page.
If that's not the case, and you're simply updating pages with different but related content, make sure you're doing it for the better. Better content, better structured urls etc.
Also, if urls are gonna change, make sure you properly forward them with 301 redirects.
Your hosting being down is not gonna be helpful. But it also depends on how often. Many companies offer about 99% up time, so right there you do know it is not always up. But down time is definitely gonna hurt you.
What else you can do:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16093678772449493, "perplexity": 2231.8383620191767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422120842874.46/warc/CC-MAIN-20150124173402-00132-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://matplotlib.org/3.0.2/api/_as_gen/matplotlib.axes.Axes.cohere.html
|
# matplotlib.axes.Axes.cohere¶
Axes.cohere(x, y, NFFT=256, Fs=2, Fc=0, detrend=<function detrend_none>, window=<function window_hanning>, noverlap=0, pad_to=None, sides='default', scale_by_freq=None, *, data=None, **kwargs)[source]
Plot the coherence between x and y.
Plot the coherence between x and y. Coherence is the normalized cross spectral density:
$C_{xy} = \frac{|P_{xy}|^2}{P_{xx}P_{yy}}$
Parameters:
Fs : scalar
The sampling frequency (samples per time unit). It is used to calculate the Fourier frequencies, freqs, in cycles per time unit. The default value is 2.
window : callable or ndarray
A function or a vector of length NFFT. To create window vectors see window_hanning(), window_none(), numpy.blackman(), numpy.hamming(), numpy.bartlett(), scipy.signal(), scipy.signal.get_window(), etc. The default is window_hanning(). If a function is passed as the argument, it must take a data segment as an argument and return the windowed version of the segment.
sides : {'default', 'onesided', 'twosided'}
Specifies which sides of the spectrum to return. Default gives the default behavior, which returns one-sided for real data and both for complex data. 'onesided' forces the return of a one-sided spectrum, while 'twosided' forces two-sided.
pad_to : int, optional
The number of points to which the data segment is padded when performing the FFT. This can be different from NFFT, which specifies the number of data points used. While not increasing the actual resolution of the spectrum (the minimum distance between resolvable peaks), this can give more points in the plot, allowing for more detail. This corresponds to the n parameter in the call to fft(). The default is None, which sets pad_to equal to NFFT.
NFFT : int
The number of data points used in each block for the FFT. A power of 2 is most efficient. The default value is 256. This should NOT be used to get zero padding, or the scaling of the result will be incorrect. Use pad_to for this instead.
detrend : {'default', 'constant', 'mean', 'linear', 'none'} or callable
The function applied to each segment before fft-ing, designed to remove the mean or linear trend. Unlike in MATLAB, where the detrend parameter is a vector, in Matplotlib it is a function. The mlab module defines detrend_none(), detrend_mean(), and detrend_linear(), but you can use a custom function as well. You can also use a string to choose one of the functions. 'default', 'constant', and 'mean' call detrend_mean(). 'linear' calls detrend_linear(). 'none' calls detrend_none().
scale_by_freq : bool, optional
Specifies whether the resulting density values should be scaled by the scaling frequency, which gives density in units of Hz^-1. This allows for integration over the returned frequency values. The default is True for MATLAB compatibility.
noverlap : int
The number of points of overlap between blocks. The default value is 0 (no overlap).
Fc : int
The center frequency of x (defaults to 0), which offsets the x extents of the plot to reflect the frequency range used when a signal is acquired and then filtered and downsampled to baseband.
Returns:
Cxy : 1-D array
The coherence vector.
freqs : 1-D array
The frequencies for the elements in Cxy.
Other Parameters:
**kwargs :
Keyword arguments control the Line2D properties:
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha float
animated bool
antialiased bool
clip_box Bbox
clip_on bool
clip_path [(Path, Transform) | Patch | None]
color color
contains callable
dash_capstyle {'butt', 'round', 'projecting'}
dash_joinstyle {'miter', 'round', 'bevel'}
dashes sequence of floats (on/off ink in points) or (None, None)
drawstyle {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}
figure Figure
fillstyle {'full', 'left', 'right', 'bottom', 'top', 'none'}
gid str
in_layout bool
label object
linestyle {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth float
marker unknown
markeredgecolor color
markeredgewidth float
markerfacecolor color
markerfacecoloralt color
markersize float
markevery unknown
path_effects AbstractPathEffect
picker float or callable[[Artist, Event], Tuple[bool, dict]]
pickradius float
rasterized bool or None
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
solid_capstyle {'butt', 'round', 'projecting'}
solid_joinstyle {'miter', 'round', 'bevel'}
transform matplotlib.transforms.Transform
url str
visible bool
xdata 1D array
ydata 1D array
zorder float
References
Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, John Wiley & Sons (1986)
Note
In addition to the above described arguments, this function can take a data keyword argument. If such a data argument is given, the following arguments are replaced by data[<arg>]:
• All arguments with the following names: 'x', 'y'.
Objects passed as data must support item access (data[<arg>]) and membership test (<arg> in data).
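A minimal usage sketch (synthetic signals, not taken from the documentation above) showing how the plotted values are returned:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two signals sharing a 10 Hz component plus independent noise (illustrative only)
fs = 200.0                                  # sampling frequency in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

fig, ax = plt.subplots()
Cxy, freqs = ax.cohere(x, y, NFFT=256, Fs=fs)   # draws the plot and returns the values
ax.set_xlabel('Frequency [Hz]')
ax.set_ylabel('Coherence')
plt.show()
```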
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38429147005081177, "perplexity": 9406.282414834619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057622.15/warc/CC-MAIN-20210925112158-20210925142158-00672.warc.gz"}
|
https://www.physicsforums.com/threads/spray-from-bike-wheels.894192/
|
# B Spray from bike wheels
1. Nov 20, 2016
### Simon Lorimer
Do different sized bicycle wheels spray differently?
I was asked this question recently and have got myself confused with an answer. To expand on the question a little: when riding through mud at the same speed, which would spray the mud further, a 26 inch wheel or a 29 inch wheel? My thoughts are that, at the same speed, the smaller wheel will send the mud further because it will have turned through a greater angle before the mud becomes unstuck from the tire, and so the mud will be projected at a greater angle. Does this seem reasonable? Are there other factors which should be taken into account? Does the mud leave each wheel at the same speed?
Hope my question is clear! Thanks in advance
Simon
2. Nov 20, 2016
### Simon Bridge
Welcome to PF;
The basic pattern will be much the same for wheels that differ only by diameter.
The amount of mud flung will depend on the rate that mud is picked up by the wheel.
Larger diameter wheels will pick up more mud (because more of the rim is in the mud at once).
Smaller wheels have higher angular speed for the same bike speed... rim speed is the same as the bike speed.
I would expect that the mud will unstick a bit like how water drips - so, same time for same sort of mud - so small wheels turn by bigger angle.
However, too large an angle and the range gets smaller ... the sweet spot is around 45deg. So I'd say that sometimes the bigger wheel flings mud farther - depends on the mud.
However - this is something that can be easily tested.
Sounds like a good project for a science class or a science fair right?
3. Nov 20, 2016
### Cutter Ketch
ok, there will be a lot of caveats about tread patterns and such. Let's assume everything is as similar as possible except for the tire diameter.
The tangential velocity is the same as the bicycle speed and so the same in both cases. The centripetal force required to keep the mud on the tire is
$mv^2/r$
So it requires only 80% as much force to hold the mud on the 29" wheel. So, it holds on longer. But, of course it takes 12% longer to get to the same angle past vertical with the larger tire.
That's where I got stuck. What determines how long the mud hangs on? The best I could do was the analogy of a water drop forming at a faucet. The rate of formation is directly proportional to the force (usually gravity). So that would suggest the mud will hang on 25% longer on the large wheel. The lower force beats the slower angular rate and the mud releases at a higher angle on the larger tire.
That last bit is a stretch, but I think the actual dynamics are probably pretty dense.
4. Nov 21, 2016
### Simon Lorimer
Thanks - I hadn't thought about the larger wheel picking up more mud - an interesting extra variable!
In the end I think you are right that trials would be the way to go. This could get messy.
Simon
5. Nov 21, 2016
### Simon Lorimer
Thanks for the reply. The water drop analogy is useful. Does the 25% longer for the large tire come from the lower centripetal force on the tire? It looks like it is necessary to make some assumptions about the behavior of the stuff on the tire
Simon
6. Nov 21, 2016
### Cutter Ketch
Yes, from the lower centripetal force.
7. Nov 21, 2016
### Simon Bridge
$mv^2/r$ is the net centripetal force needed to keep the mud moving with the tyre.
If the net force holding the mud to the tyre is less than this, then it will not follow the tyre around.
8. Nov 21, 2016
### Cutter Ketch
But it does! That's the maddening part. If it were a simple question of enough stick or not enough stick, the mud either wouldn't have been picked up in the first place or it wouldn't have later come loose.
Clearly the mud/water went on the tire in a configuration that can hold with sufficient centripetal force, but rearranged under the force to a configuration that couldn't. The change in configuration requires time and energy and some nontrivial fluid dynamics. The only analogy I could think of was the water droplet where surface tension won't allow the water droplet to come loose until it has flowed and rearranged into a drop with more weight than the necked down surface tension can support.
9. Nov 21, 2016
### Simon Bridge
... Maybe I am not being clear: I think we agree here. technically the part of the mud sticking to the rubber stays behind - it is the internal cohesion of the mud blob we are considering here. That is what most people are thinking of when they talk about mud sticking to stuff (witness post #1: the mud flies off the wheel - yet the wheel is still muddy...) How well the mud holds to the wheel, in this sense, will change over a short time.
ie. You can pick up a lump of mud on the end of a stick and watch as it falls off. The stick is still muddy.
The thing to realise is that centripetal force is a resultant force, not an applied force.
10. Nov 21, 2016
### CWatters
In the above, "V" is the velocity relative to the axle? But if our wheel is rolling, isn't it the velocity relative to the contact point with the ground that matters? Isn't that the point which mud at the top of the wheel is revolving around?
11. Nov 21, 2016
### Cutter Ketch
Oops. Yes. Dang. Back to the drawing board.
12. Nov 21, 2016
### Simon Bridge
I think this is a neat project for an investigation at school level.
I wonder if it has been formally investigated - probably.
13. Dec 1, 2016
### Simon Lorimer
This is the closest I can find. I think this suggests that the important thing is the way that the water (in this case) breaks up as it leaves the wheel, as is suggested by people above, though I am a little less clear on the details. I have to admit that I haven't read the whole thing!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7024909257888794, "perplexity": 990.8210132828498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811794.67/warc/CC-MAIN-20180218062032-20180218082032-00283.warc.gz"}
|
https://se.mathworks.com/help/fininst/inflationbuild.html
|
# inflationbuild
Build inflation curve from market zero-coupon inflation swap rates
## Syntax
``InflationCurve = inflationbuild(BaseDate,BaseIndexValue,ZCISDates,ZCISRates)``
``myInflationCurve = inflationbuild(___,Name,Value)``
## Description
example
`InflationCurve = inflationbuild(BaseDate,BaseIndexValue,ZCISDates,ZCISRates)` builds an inflation curve from market zero-coupon inflation swap (ZCIS) rates. The `InflationCurve` output is an `inflationcurve` object.
example
`myInflationCurve = inflationbuild(___,Name,Value)` specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in the previous syntax. For example, `myInflationCurve = inflationbuild(BaseDate,BaseIndexValue,ZCISDates,ZCISRates,'Seasonality',SeasonalRates)` builds an `inflationcurve` object from market ZCIS dates and rates.
## Examples
collapse all
This example shows the workflow to build an `inflationcurve` object from zero-coupon inflation swap (ZCIS) rates using `inflationbuild`.
Define the inflation curve parameters.
```matlab
BaseDate = datetime(2020,9,20);
BaseIndexValue = 100;
ZCISTimes = [calyears([1 2 3 4 5 7 10 20 30])]';
ZCISRates = [0.51 0.65 0.87 0.92 0.95 1.42 1.75 2.03 2.54]'./100;
ZCISDates = BaseDate + ZCISTimes;
SeasonalRates = [-0.19 -0.09 -0.04 0.1 0.16 0.11 0.26 0.17 -0.07 -0.08 -0.14 -0.19]'./100;
```
Use `inflationbuild` to create an `inflationcurve` object.
`myInflationCurve = inflationbuild(BaseDate,BaseIndexValue,ZCISDates,ZCISRates,'Seasonality',SeasonalRates)`
```
myInflationCurve =
  inflationcurve with properties:

                    Basis: 0
                    Dates: [10x1 datetime]
     InflationIndexValues: [10x1 double]
    ForwardInflationRates: [9x1 double]
              Seasonality: [12x1 double]
```
## Input Arguments
collapse all
Base date of inflation curve, specified as a scalar datetime, serial date number, date character vector, or date string.
Data Types: `double` | `char` | `string` | `datetime`
Base index value of inflation curve, specified as a scalar numeric.
Data Types: `double`
Market ZCIS maturity dates minus lag, specified as an `NINST`-by-`1` vector of datetimes, serial date numbers, cell array of date character vectors, or date string array.
Data Types: `double` | `cell` | `char` | `string` | `datetime`
Market ZCIS rates, specified as an `NINST`-by-`1` vector of decimals.
Data Types: `double`
### Name-Value Arguments
Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.
Example: ```myInflationCurve = inflationbuild(BaseDate,BaseIndexValue,ZCISDates,ZCISRates,'Seasonality',SeasonalRates)```
Day count basis, specified as the comma-separated pair consisting of `'Basis'` and a scalar integer.
• 0 — actual/actual
• 1 — 30/360 (SIA)
• 2 — actual/360
• 3 — actual/365
• 4 — 30/360 (PSA)
• 5 — 30/360 (ISDA)
• 6 — 30/360 (European)
• 7 — actual/365 (Japanese)
• 8 — actual/actual (ICMA)
• 9 — actual/360 (ICMA)
• 10 — actual/365 (ICMA)
• 11 — 30/360E (ICMA)
• 12 — actual/365 (ISDA)
• 13 — BUS/252
Data Types: `double`
Seasonal adjustment rates, specified as the comma-separated pair consisting of `'Seasonality'` and a `12`-by-`1` vector in decimals for each month ordered from January to December. The rates are annualized and continuously compounded seasonal rates that are internally corrected to add to `0`.
Data Types: `double`
First month inflation index, specified as the comma-separated pair consisting of `'FirstMonthIndex'` and a positive numeric.
Data Types: `double`
## Output Arguments
collapse all
Inflation curve, returned as an `inflationcurve` object. The object has the following properties:
• `Basis`
• `Dates`
• `InflationIndexValues`
• `ForwardInflationRates`
• `Seasonality`
## Algorithms
Build an inflation curve from a series of breakeven zero-coupon inflation swap (ZCIS) rates:
`$\begin{array}{l}I(0,T_{1Y})=I(T_0)\left(1+b(0;T_0,T_{1Y})\right)^{1}\\ I(0,T_{2Y})=I(T_0)\left(1+b(0;T_0,T_{2Y})\right)^{2}\\ I(0,T_{3Y})=I(T_0)\left(1+b(0;T_0,T_{3Y})\right)^{3}\\ \dots\\ I(0,T_{i})=I(T_0)\left(1+b(0;T_0,T_{i})\right)^{T_i}\end{array}$`
where
• $I\left(0,{T}_{i}\right)$ is the breakeven inflation index reference number for maturity date Ti.
• $I\left({T}_{0}\right)$ is the base inflation index value for the starting date T0.
• $b\left(0;{T}_{0},{T}_{i}\right)$ is the breakeven inflation rate for the ZCIS maturing on Ti.
The ZCIS rates typically have maturities that increase in whole number of years, so the inflation curve is built on an annual basis. From the annual basis inflation curve, the annual unadjusted (that is, not seasonally adjusted) forward inflation rates are computed as follows:
`${f}_{i}=\frac{1}{\left({T}_{i}-{T}_{i-1}\right)}\mathrm{log}\left(\frac{I\left(0,{T}_{i}\right)}{I\left(0,{T}_{i-1}\right)}\right)$`
The unadjusted forward inflation rates are used for interpolating and also for incorporating seasonality to the inflation curve.
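As an illustrative sketch (not part of the MathWorks documentation, reusing the ZCIS quotes from the example above), the bootstrapped index values and the unadjusted forward rates can be computed directly:

```matlab
% Sketch: breakeven index values and annual unadjusted forward inflation rates
T = [1 2 3 4 5 7 10 20 30]';                          % ZCIS maturities in years
I = BaseIndexValue .* (1 + ZCISRates).^T;             % I(0,T_i) from the breakeven rates
f = diff(log([BaseIndexValue; I])) ./ diff([0; T]);   % f_i = log(I_i/I_{i-1}) / (T_i - T_{i-1})
```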
For monthly periods that are not a whole number of years, seasonal adjustments can be made to reflect seasonal patterns of inflation within the year. These 12 monthly seasonal adjustments are annualized and they add up to zero to ensure that the cumulative seasonal adjustments are reset to zero every year.
`$\begin{array}{l}I(0,T_i)=I(T_0)\exp\left(\int_{T_0}^{T_i}f(u)\,du\right)\exp\left(\int_{T_0}^{T_i}s(u)\,du\right)\\ I(0,T_i)=I(0,T_{i-1})\exp\left(\left(T_i-T_{i-1}\right)\left(f_i+s_i\right)\right)\end{array}$`
where
• $I\left(0,{T}_{i}\right)$ is the breakeven inflation index reference number.
• $I\left(0,{T}_{i-1}\right)$ is the previous inflation reference number.
• fi is the annual unadjusted forward inflation rate.
• si is the annualized seasonal component for the period $\left[{T}_{i-1},{T}_{i}\right]$.
The first year seasonal adjustment may need special treatment because, typically, the breakeven inflation reference number of the first month is already known. If that is the case, the unadjusted forward inflation rate for the first year needs to be recomputed for the remaining 11 months.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8432289958000183, "perplexity": 11141.811274240323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00503.warc.gz"}
|
http://openstudy.com/updates/512ff495e4b098bb5fbd8659
|
krissywatts asked: what are the coordinates? for y=2x-5, x-2y=-8
1. zepdrix:
Let's solve this system using substitution. $\large \color{royalblue}{y=2x-5}$ $\large x-2\color{royalblue}{y}=-8 \qquad \rightarrow \qquad x-2\color{royalblue}{(2x-5)}=-8$ From here, we can solve for x since that is the only variable in our equation.
2. zepdrix:
Understand how I plugged that in? :o
3. krissywatts:
yeah you want to solve for x correct?
4. zepdrix:
yes c: If we solve for x, we can plug that x value into one of the two equations to find a corresponding y value. That will be the coordinate pair we were looking for.
5. krissywatts:
i get (6,7) :) thanks for your help
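For completeness (a worked check, not part of the original thread): $x-2(2x-5)=-8 \Rightarrow x-4x+10=-8 \Rightarrow -3x=-18 \Rightarrow x=6$, and then $y=2(6)-5=7$, so the solution is indeed $(6,7)$.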
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.996789276599884, "perplexity": 10930.89821048685}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261771.50/warc/CC-MAIN-20140728011741-00293-ip-10-146-231-18.ec2.internal.warc.gz"}
|
http://scicomp.stackexchange.com/users/3767/noahr
|
# NoahR
reputation
3
bio website location age member for 1 year, 7 months seen Dec 23 '13 at 23:12 profile views 2
# 1 Question
9 solve $xA=b$ for $x$ using LAPACK and BLAS
# 148 Reputation
+45 solve $xA=b$ for $x$ using LAPACK and BLAS
This user has not answered any questions
# 2 Tags
0 lapack 0 matrix-equations
# 6 Accounts
Stack Overflow 376 rep 418 TeX - LaTeX 215 rep 17 Computational Science 148 rep 3 Ask Ubuntu 105 rep 3 Super User 101 rep 1
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7501915693283081, "perplexity": 9777.018370311065}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663460.43/warc/CC-MAIN-20140930004103-00479-ip-10-234-18-248.ec2.internal.warc.gz"}
|
http://hepnp.ihep.ac.cn/article/doi/10.1088/1674-1137/43/2/023001
|
# The Higgs signatures at the CEPC CDR baseline
• As a Higgs factory, the CEPC (Circular Electron-Positron Collider) project aims at precision measurements of the Higgs boson properties. A baseline detector concept, APODIS (A PFA Oriented Detector for the HIggS factory), has been proposed for the CEPC CDR (Conceptual Design Report) study. We explore the Higgs signatures for this baseline design with $\nu\bar{\nu}$ Higgs events. The detector performance for reconstructing charged particles, photons and jets is quantified with $H \to \mu\mu, \gamma\gamma$ and jet final states, respectively. The resolutions of reconstructed Higgs boson mass are comparable for the different decay modes with jets in the final states. We also analyze the $H \to WW^*$ and $ZZ^*$ decay modes, where a clear separation between different decay cascades is observed.
Get Citation
Hang Zhao, Yong-Feng Zhu, Cheng-Dong Fu, Dan Yu and Man-Qi Ruan. The Higgs signatures at the CEPC CDR baseline[J]. Chinese Physics C, 2019, 43(2): 1-1. doi: 10.1088/1674-1137/43/2/023001
Milestone
Revised: 2018-10-29
## The Higgs signatures at the CEPC CDR baseline
###### Corresponding author: Man-Qi Ruan, [email protected]
• 1. Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
• 2. CAS Center for Excellence in Particle Physics, Beijing 100049, China
• 3. Collaborative Innovation Center for Particles and Interactions, Hefei 230026, China
• 4. University of Chinese Academy of Sciences, Beijing 100049, China
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9449107050895691, "perplexity": 10182.737257886654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00528.warc.gz"}
|
http://mathhelpforum.com/calculus/182795-series-expansion.html
|
# Math Help - Series expansion?
1. ## Series expansion?
My problem is....
Use the series expansions for sin x and cos x to find the first two
terms of a series expansion for tan x
but which series do i use? Power, maclaurin?
also how do I find tan x ( i know sinx/cosx=tanx) but how do i get there using series?
many thanks
2. Use polynomial division.
I'm having some trouble using polynomial division. What is the method when there is a fraction in the numerator/denominator?
when i divide straight down as seen i get
but i know the answer is
if someone could help me with the method please.
4. Originally Posted by decoy808
I'm having some trouble using polynomial division. What is the method when there is a fraction in the numerator/denominator?
when i divide straight down as seen i get
but i know the answer is
if someone could help me with the method please.
You should learn the polynomial long division method. Please refer to Polynomial Long Division. I hope you will find it simple and illustrative.
5. Originally Posted by decoy808
My problem is....
Use the series expansions for sin x and cos x to find the first two
terms of a series expansion for tan x
but which series do i use? Power, maclaurin?
also how do I find tan x ( i know sinx/cosx=tanx) but how do i get there using series?
many thanks
$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$
$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$
$\tan x = \frac{\sin x}{\cos x} \Rightarrow\ \sin x = \cos x \tan x$
$\Rightarrow\ x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right)\tan x$
Therefore the first term of $\tan x$ is $x$, and the 2nd term involves $x^3$, as an $x^2$ would give even powers of $x$.
$x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right)\left(x + \frac{x^3}{k} + \cdots\right)$
Multiplying out and comparing terms to find $k$,
$x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = x - \frac{x^3}{2!} + \frac{x^5}{4!} + \cdots + \frac{x^3}{k} - \frac{x^5}{(k)\,2!} + \frac{x^7}{(k)\,4!} - \cdots$
$\Rightarrow\ \frac{x^3}{k} - \frac{x^3}{2!} = -\frac{x^3}{3!}$
$\Rightarrow\ \frac{2x^3 - kx^3}{(k)\,2!} = -\frac{x^3}{3!}$
which gives $k$ and therefore the second term in the expansion of $\tan x$.
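(Editorial completion.) Solving the last equation, $\frac{2-k}{2k} = -\frac{1}{6}$, gives $k = 3$, so the first two terms are $\tan x = x + \frac{x^3}{3} + \cdots$. A quick check with sympy, assuming it is installed:

```python
import sympy as sp

x = sp.symbols('x')
# Maclaurin series of tan x; the first two terms are x + x**3/3
print(sp.series(sp.tan(x), x, 0, 6))   # x + x**3/3 + 2*x**5/15 + O(x**6)
```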
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8262230753898621, "perplexity": 865.7133800847511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927824.26/warc/CC-MAIN-20150521113207-00305-ip-10-180-206-219.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/220677-field-vector-space-understood-reals.html
|
# Thread: Is the field of this vector space understood to be the reals?
1. ## Is the field of this vector space understood to be the reals?
"Let V denote the set of all differentiable real-valued functions defined on the real line."
Does this automatically mean that this vector space is over the field of reals?
Why or why not?
I ask because I need to prove this is a vector space. But, if I pick some element a from F (the field), then the scalar multiplication of a and an element of V is only real-valued if a is a real. This would make this scalar multiplication not an element of V, making it not a vector space, if F were a field that contained non-real elements. So, I must assume that this V is over the field of reals in order for it to prove it is a vector space, but why am I warranted to make that claim?
Sorry if this is a dumb question, I am just starting LA independently.
Thanks
2. ## Re: Is the field of this vector space understood to be the reals?
Yes, your argument is correct. If a is a "scalar" and v is a "vector" then av must be a vector. If you multiply a "differentiable real-valued function defined on the real line" by a complex number, the result would no longer be "differentiable real-valued function defined on the real line". Now, it would be possible to have the space of "differentiable real-valued functions defined on the real line" over the field of rational numbers since the product of a rational and a real number is a real number- but that would be very unusual.
3. ## Re: Is the field of this vector space understood to be the reals?
Originally Posted by HallsofIvy
Yes, your argument is correct. If a is a "scalar" and v is a "vector" then av must be a vector. If you multiply a "differentiable real-valued function defined on the real line" by a complex number, the result would no longer be "differentiable real-valued function defined on the real line". Now, it would be possible to have the space of "differentiable real-valued functions defined on the real line" over the field of rational numbers since the product of a rational and a real number is a real number- but that would be very unusual.
Hey HoI,
So from your response, when being asked to prove that V is or is not a vector space, if the field is not mentioned, I should assume the field is one that would not make it immediately impossible for V to be a vector space.
So there is nothing about "the set of all differentiable real-valued functions defined on the real line" that inherently makes the field R.
And, this is a matter of taking it for granted that the field is an appropriate one that wouldn't trivialize the exercise.
Is that correct?
Thank you.
4. ## Re: Is the field of this vector space understood to be the reals?
Yes, that seems like a reasonable interpretation.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963447093963623, "perplexity": 211.11455539315747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319636.73/warc/CC-MAIN-20170622161445-20170622181445-00226.warc.gz"}
|
http://bioconductor.statistik.tu-dortmund.de/packages/3.8/bioc/vignettes/rhdf5client/inst/doc/main.html
|
# HSDSSource
An object of type HSDSSource represents an HDF Group server running on a machine. The constructor requires the endpoint and the server type. At present, the only valid value is hsds (for the HDF Scalable Data Service). If the type is not specified, the server will be assumed to be hsds.
src.hsds <- HSDSSource('http://hsdshdflab.hdfgroup.org')
The routine listDomains is provided for inspection of the server hierarchy. This is the hierarchy that maps approximately to the directory structure of the server file system. The purpose of this routine is to assist the user in locating HDF5 files.
The user needs to know the root domain of the server. The data set’s maintainer should publish this information along with the server endpoint.
listDomains(src.hsds, '/home/jreadey')
## [1] "/home/jreadey/4DStem"
## [22] "/home/jreadey/tmp"
listDomains(src.hsds, '/home/jreadey/HDFLabTutorial')
## [1] "/home/jreadey/HDFLabTutorial/03.h5"
## [3] "/home/jreadey/HDFLabTutorial/04a.h5"
# HSDSFile
An object of class HSDSFile represents a HDF5 file. The object is constructed by providing a source and a file domain.
f0 <- HSDSFile(src.hsds, '/home/spollack/testzero.h5')
f1 <- HSDSFile(src.hsds, '/shared/bioconductor/tenx_full.h5')
The function listDatasets lists the datasets in a file.
listDatasets(f0)
## [1] "/grpA/grpAA/dsetAA1" "/grpA/grpAB/dsetX" "/grpB/grpBA/dsetX"
## [4] "/grpB/grpBB/dsetBB1" "/grpC/dsetCC"
listDatasets(f1)
## [1] "/newassay001"
# HSDSDataset
Construct a HSDSDataset object from a HSDSFile and a dataset path.
d0 <- HSDSDataset(f0, '/grpA/grpAB/dsetX')
d1 <- HSDSDataset(f1, '/newassay001')
## Data Fetch (1)
The low-level data retrieval method is getData. Its argument is a vector of slices of type character. Valid slices are : (all indices), 1:10 (indices 1 through 10 inclusive), :10 (same as 1:10), 5: (from 5 to the maximum value of the index) and 2:14:4 (from 2 to 14 inclusive in increments of 4.)
Note that the slice should be passed in R semantics: 1 signifies the first element, and the last element is included in the slice. (Internally, rhdf5client converts to Python semantics, in which the first index is 0 and the last element is excluded. But here, as everywhere in the package, all Python details should be hidden from the user.)
apply(getData(d1, c('1:4', '1:27998'), transfermode='JSON'), 1, sum)
## [1] 4046 2087 4654 3193
apply(getData(d1, c('1:4', '1:27998'), transfermode='binary'), 1, sum)
## [1] 4046 2087 4654 3193
## Data Fetch (2)
getData is generic. It can also be passed a list of vectors for the index argument, one vector in each dimension. At present, it only works if each of the vectors can be expressed as a single slice. Eventually, this functionality will be expanded to the general multi-dimensional case of multiple slices. In the general case, multiple array blocks will be fetched and bound back together into a single array.
apply(getData(d1, list(1:4, 1:27998), transfermode='JSON'), 1, sum)
## [1] 4046 2087 4654 3193
apply(getData(d1, list(1:4, 1:27998), transfermode='binary'), 1, sum)
## [1] 4046 2087 4654 3193
## Data Fetch (3)
The [ operator is provided for the two most typical cases (one-dimensional and two-dimensional numeric data.)
apply(d1[1:4, 1:27998], 1, sum)
## [1] 4046 2087 4654 3193
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20723573863506317, "perplexity": 4945.211635939889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00320.warc.gz"}
|
https://kb.osu.edu/handle/1811/55569?show=full
|
dc.creator: Yang, J.; Zhang, L.; Wang, L.; Zhong, D.
dc.date.accessioned: 2013-07-16T21:46:35Z
dc.date.available: 2013-07-16T21:46:35Z
dc.date.issued: 2013
dc.identifier: 2013-FE-08
dc.identifier.uri: http://hdl.handle.net/1811/55569
dc.description: Author Institution: Department of Physics, The Ohio State University, Columbus, OH 43210; Department of Chemistry, Columbia University, New York, NY 10027; Department of Physics, Department of Chemistry and Biochemistry, and Programs of Biophysics, Chemical Physics, and Biochemistry, The Ohio State University, Columbus, OH 43210
dc.description.abstract: Water motion probed by intrinsic tryptophan shows the significant slowdown around protein surfaces but it is unknown how the ultrafast internal conversion of two nearly degenerate states of Trp ($^1$L$_a$ and $^1$L$_b$) affects the initial hydration in proteins. Here, we used a mini-protein with ten different tryptophan locations one at a time through site-directed mutagenesis and extensively characterized the conversion dynamics of the two states. We observed all the conversion time scales in 40-80 fs by measurement of their anisotropy dynamics. This result is significant and shows no noticeable effect on the initial observed hydration dynamics and unambiguously validates the slowdown of hydration layer dynamics as shown here again in two mutant proteins.
dc.language.iso: en
dc.publisher: Ohio State University
dc.title: FEMTOSECOND CONICAL INTERSECTION DYNAMICS OF TRYPTOPHAN IN PROTEINS AND VALIDATION OF SLOWDOWN OF HYDRATION LAYER DYNAMICS
dc.type: Article; Image; Presentation
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649572730064392, "perplexity": 12476.14770002815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00186.warc.gz"}
|
http://math.stackexchange.com/questions/34365/show-that-x2-3y2-n-either-has-no-solutions-or-infinitely-many-solutions/34371
|
# Show that $x^2 - 3y^2 = n$ either has no solutions or infinitely many solutions
I have a question that I have problem with in number theory about Diophantine,and Pell's equations. Any help is appreciated!
We suppose $n$ is a fixed non-zero integer, and suppose that $x^2_0 - 3 y^2_0 = n$, where $x_0$ and $y_0$ are bigger than or equal to zero. Let $x_1 = 2 x_0 + 3 y_0$ and $y_1 = x_0 + 2 y_0$. We need to show that we have $x^2_1 - 3 y^2_1 = n$, with $x_1>x_0$, and $y_1>y_0$. Also, we need to show then that given $n$, the equation $x^2 - 3 y^2 = n$ has either no solutions or infinitely many solutions. Thank you very much!
-
Just out of curiosity: How long did you spend trying to do this problem on your own before posting? – Arturo Magidin Apr 21 '11 at 20:02
You should substitute $x=2x_0+3y_0$, $y=x_0+2y_0$ in the expression $x^2-3y^2$, simplify, see what happens. – André Nicolas Apr 21 '11 at 20:02
@Arturo: I did try but I think I did mistake somewhere because I couldn't simplify the equation. Thanks! – kira Apr 21 '11 at 20:11
@user6312:I did the same thing but got a problem. I'll try later again. Thanks! – kira Apr 21 '11 at 20:11
Next time, please say what you tried and why things are not working out. Here, you could easily have posted your attempt, and people could have pointed out if (or where) there was a mistake. You'd learn a lot more that way. – Arturo Magidin Apr 21 '11 at 20:12
The fact that if $x_1=2x_0+3y_0$ then $x_1\gt x_0$ is immediate: you cannot have both $x_0$ and $y_0$ zero; likewise with $y_1$.
That $x_1^2-3y_1^2$ is also equal to $n$ if you assume that $x_0^2 - 3y_0^2=n$ should follow by simply plugging in the definitions of $x_1$ and $y_1$ (in terms of $x_0$ and $y_0$), and chugging.
Finally, what you have just done is show that if you have one solution, you can come up with another solution. Do you see how this implies the final thing you "need to show"?
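(Editorial addition.) Writing out the plugging and chugging explicitly: $$(2x_0+3y_0)^2 - 3(x_0+2y_0)^2 = 4x_0^2+12x_0y_0+9y_0^2 - 3x_0^2-12x_0y_0-12y_0^2 = x_0^2-3y_0^2 = n,$$ so $(x_1,y_1)$ is again a solution, and since $x_1>x_0$ and $y_1>y_0$ the step can be repeated indefinitely: one solution yields infinitely many.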
-
How to show it has no solutions or infinetely many solutions? Thanks! – kira Apr 24 '11 at 23:15
@kira: The entire process tells you how to go from one solution to another. Keep going. If you have at least one solution, how many different solutions will you have? – Arturo Magidin Apr 25 '11 at 2:00
If we have a solution, then we can find another one with $x_1>x_0$, and $y_1>y_0$ in the same quadrant. Thus, we have infinitely many. Is anything to be added since we can find a new solution every time with bigger $x_i$ and $y_i$? – user9636 Apr 25 '11 at 4:11
Thank you! – kira Apr 25 '11 at 4:20
HINT $\:$ Put $\rm\: z = x+\sqrt{3}\ y\:,\:$ norm $\rm\:N(z)\: = z\:z' = x^2 - 3\ y^2\:.\:$ Then $\rm u = 2 + \sqrt{3}\ \Rightarrow\ N(u) = u\:u' = 1\:$ so $\rm\ N(u\:z)\ =\ (u\:z)\:(u\:z)' =\ u\:u'\:z\:z'\ =\ z\:z'\:,\:$ where $\rm\ u\:z\ =\ 2\:x+3\:y + (x+2\:y)\ \sqrt{3}\:.\:$ Therefore the composition law (symmetry) $\rm\ z\to u\:z\$ on the solution space $\rm\:\{z\ :\ N(z) = n\}$ arises simply by multiplying by an element of $\rm\:u\:$ of norm $1\:,\:$ using the multiplicativity of the norm: $$\rm\ N(u) = 1\ \ \Rightarrow\ \ N(u\:z)\ =\ N(u)\:N(z)\ =\ N(z) = n$$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9690213203430176, "perplexity": 220.22139670215276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068749.35/warc/CC-MAIN-20150827025428-00035-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://hussieandthemachine.tumblr.com/
|
agirlnamedfreddy said: You're such a shit, you know that?
Yep :3
Today’s Gender of the day is: The Quadratic Formula
Today’s Gender of the day is: A Formula for Calculating Nothing in Particular
This isn’t the quadratic formula, it should be b^2-4ac.
That is, $x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}$
The windfalls of this education. They are overwhelming.
(via agirlnamedfreddy)
Tags: science maths
"Try a thing you haven’t done three times. Once, to get over the fear of doing it. Twice, to learn how to do it. And a third time, to figure out whether you like it or not."
Virgil Garnett Thomson (via feellng)
Tags: Climb
I have a physics textbook from before the electron was discovered and they just sound so frustrated it’s hilarious
"We have no fucking idea what is happening but we know SOMETHING is happening."
(via agirlnamedfreddy)
Anonymous said: My compliment was sincere, but I knew you would question it - you question a lot of things, and I can relate to that. You're right that we must agree in the definitions of words such as "belief" but this ask function is quite limited in the amount of available space to write. One of the more recent posts you mentioned that you believed in the abilities of someone who claimed to be a psychic or clairvoyant, I think it was? I'm not aware that the laws of physics allow for such ability to occur.
Well sweetheart, not to be rude but that means to me that you just don’t know physics very well. Not from a complete perspective view of the science as a whole anyway. Is it safe for me to assume that you are math brained concerning pure physics? See, my math centers are practically non functioning. So I perceive and express myself more from my language centers. More like the way Feynman did with images and wordy descriptions rather than a hard cast of numbers and equations.
These two very different approaches to physics and the way they’re expressed and understood and communicated usually tend to not understand or typically believe one another’s theories. This has always been a problem in physics.
And what do physics have to do with psychology, neurology, neurobiology, and neuroimaging? Well they have a lot to do with them actually if you stop to consider the correlations between things like quantum consciousness, and spooky entanglement, MWI, Spherical Harmonics, sound, light, and gravity … and you might be able to better arrive at such conclusions if of course anyone understood light, gravity, and the way we perceive the world to a better extent than we do at this time through the feeble capabilities of the human eye and the limited ways we have to convey said concepts outside of equations and math.
Math is a language too, and not everyone is fluent.
They have just recently developed a machine that can look at your thoughts in images did you know that? Like that episode of Dr. House a while back. That machine is now a reality. Science can recreate your thoughts into pictures as well as reconstruct your dreams with 50% accuracy.
What I said concerning that post was (adamantly) that I LOATHE psychics. I hate them with a passion; most of them I think are fakes, charlatans, and liars who use and abuse people for money worse than preachers do.
But Theresa Caputo I do believe for one reason more than any other. Because she doesn’t always perform for profit. And I think that makes all the difference. She’s not crazy, she’s level headed, and what other motivation would there be to “help” people if she doesn’t always ask for payment? I don’t know of another medium who does that.
Now.. I didn’t say however that I agree with her take on what it is that she accomplishes when she does readings. I don’t agree with her particular take on what it is that she believes she is in communication with.
What she does actually has everything to do with physics. And her brain is wired that way specifically to where she can tap into the subconscious of another entity, connect with them, attach and basically plug into their brains and communicate through their consciousness.
That’s rare, that’s HUGE, and that has everything to do with human biology and the aspect of the physical sciences that are connected indirectly and directly to physics especially quantum physics.
I think that from what I’ve observed of her talent, that she is “special” I do believe that she see’s things, hears things, and that she can actually read people. But I don’t know if what she is tapping into is “spirit” That’s a word that belongs in the dictionary that we’ve read from since the dark ages. I think she believes it’s that. But that is NOT what I believe.
But truth be told, whether or not that has anything to do with physics isn’t the point. But the fact of the matter is, it does.
I understand things differently from most people, because I am a synesthete. I have a unique brain that is hardwired very different from most other peoples brains. My signals are sort of hotwired, and for whatever the reason was for that anomaly in my brain, whether it be brain damage, or just something I was born with? I know that I perceive the world very different from the standard expected norm. So I know that the human brain functions differently in some than it does in others. Most people wouldn’t understand that and so when they do observe experiences, they automatically attempt to apply those observations to mysticism.
When that’s not what mysticism is at all. That’s a totally different science all together. That’s what I personally like to call the unknown science. In that there will come a time when the human brain does collectively catch up, and evolve to where all people will possess these abilities. I don’t believe they are unique to me, nor to people like Theresa. I think we’re just the first in a long line of the evolution of the human consciousness.
That’s what I believe.
I usually enjoy your blog, but frankly anyone who thinks Feynman was more inclined to describing physics with language and pictures than with numbers has their head up their ass and no rigorous knowledge of the field.
I’m going to ignore the rest of the nonsense in this post but frankly, as a physicist, it’s insulting to pretend that “mathy understanding” and “qualitative understanding” are two different camps with conflicting opinions. If your qualitative “understanding” is mathematically ungrounded or if you don’t feel it in your bones while looking at math, you simply aren’t doing physics.
npr:
"Art In A Jar 2: Details, Details"
Can you find the masterpiece in the mass of pieces?
Okay but consider this: mermaids in space
Space mermaids? As in: alien mermaids that live in the vacuum of space and swim between the stars? A setting that uses the analogy of deep space as the open ocean but keeps all…
MY SCIENCE TEACHER CAUGHT THE TABLE ON FIRE AND HES JUST STARING AT IT
I LOVE SCIENCE TEACHERS
I’M SORRY BUT HOW BADLY DID HE FUCK UP READING HIS CALIPER?
(Source: ghoulbutt, via maplehoofs)
ask anyone what 2011 was like on tumblr and 9/10 times they’ll link you to this video
(Source: emoprincess2k12, via checkhunter)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2948490381240845, "perplexity": 1287.4070852119835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444312.8/warc/CC-MAIN-20141017005724-00326-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/final-concentrations-unknown-volume.624107/
|
# Final concentrations unknown volume
1. ### keen55
1
Hi
I have a known volume of water with a known concentration of calcium.
I want to bring that volume up to a new (slightly) higher volume with a new (slightly) higher concentration. To do this I am adding a solution with a known concentration of Ca but cannot remember how to calculate the volume (of the second solution) I need to get to the final concentration. I cannot adjust the concentration of the second substance, I can only adjust the volume.
thanks
2. ### mycotheology
91
Use the equation: C1V1 = C2V2
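(Editorial aside, not part of the original thread.) For the mixing case described above (adding a volume V2 of stock at concentration C2 to a volume V1 at concentration C1 to reach a target concentration Cf), a calcium mass balance C1*V1 + C2*V2 = Cf*(V1 + V2) gives V2 = V1*(Cf - C1)/(C2 - Cf). A minimal sketch, with made-up numbers:

```python
def added_volume(v1, c1, c2, cf):
    """Volume of stock (concentration c2) to add to v1 of solution at c1
    so that the mixture ends up at concentration cf.
    From the mass balance c1*v1 + c2*v2 = cf*(v1 + v2);
    cf must lie between c1 and c2, and units must be consistent."""
    return v1 * (cf - c1) / (c2 - cf)

# e.g. 100 L at 40 mg/L Ca, stock at 400 mg/L, target 50 mg/L
print(added_volume(100, 40, 400, 50))   # ~2.86 L of stock needed
```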
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183661937713623, "perplexity": 899.4204477279056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987174.71/warc/CC-MAIN-20150728002307-00000-ip-10-236-191-2.ec2.internal.warc.gz"}
|
http://umj.imath.kiev.ua/article/?lang=en&article=5329
|
On the problem of approximation of functions by algebraic polynomials with regard for the location of a point on a segment
Motornyi V. P.
Abstract
We obtain a correction of an estimate of the approximation of functions from the class W^r H^ω (here, ω(t) is a convex modulus of continuity such that tω′(t) does not decrease) by algebraic polynomials with regard for the location of a point on an interval.
English version (Springer): Ukrainian Mathematical Journal 60 (2008), no. 8, pp 1270-1284.
Citation Example: Motornyi V. P. On the problem of approximation of functions by algebraic polynomials with regard for the location of a point on a segment // Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1087–1098.
Full text
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9260474443435669, "perplexity": 694.4852845676968}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00082.warc.gz"}
|
https://k12.libretexts.org/Bookshelves/Science_and_Technology/Biology/13%3A_Human_Biology/13.06%3A_Central_Nervous_System
|
13.6: Central Nervous System
The human brain. The "control center." What does it control?
Practically everything. From breathing and heartbeat to reasoning, memory, and language. And it is the main part of the central nervous system.
Central Nervous System
The nervous system has two main divisions: the central nervous system and the peripheral nervous system (see Figure below). The central nervous system (CNS) includes the brain and spinal cord (see Figure below). You can see an overview of the central nervous system at this link: vimeo.com/2024719.
The Brain
The brain is the most complex organ of the human body and the control center of the nervous system. It contains an astonishing 100 billion neurons! The brain controls such mental processes as reasoning, imagination, memory, and language. It also interprets information from the senses. In addition, it controls basic physical processes such as breathing and heartbeat.
The brain has three major parts: the cerebrum, cerebellum, and brain stem. These parts are shown in Figure below and described in this section.
• The cerebrum is the largest part of the brain. It controls conscious functions such as reasoning, language, sight, touch, and hearing. It is divided into two hemispheres, or halves. The hemispheres are very similar but not identical to one another. They are connected by a thick bundle of axons deep within the brain. Each hemisphere is further divided into the four lobes shown in Figure below.
• The cerebellum is just below the cerebrum. It coordinates body movements. Many nerve pathways link the cerebellum with motor neurons throughout the body.
• The brain stem is the lowest part of the brain. It connects the rest of the brain with the spinal cord and passes nerve impulses between the brain and spinal cord. It also controls unconscious functions such as heart rate and breathing.
Spinal Cord
The spinal cord is a thin, tubular bundle of nervous tissue that extends from the brainstem and continues down the center of the back to the pelvis. It is protected by the vertebrae, which encase it. The spinal cord serves as an information superhighway, passing messages from the body to the brain and from the brain to the body.
Humanoid Robot Brains
The smartest people in the world have spent millions of dollars on developing high-tech robots. But even though technology has come a long way, these humanoid robots are nowhere close to having the "brain" and motor control of a human. Why is that? Learn about the motor control processes in the human brain, and how cutting-edge research is trying to implement it in robots below.
Science Friday: Face Time: How quickly do you judge a face?
How fast do you judge somebody by their face? In this video by Science Friday, Dr. Jon Freeman discusses how the brain quickly creates character assessments of people and the effects these assessments may have.
Summary
• The central nervous system includes the brain and spinal cord.
• The brain is the control center of the nervous system. It controls virtually all mental and physical processes.
• The spinal cord is a long, thin bundle of nervous tissue that passes messages from the body to the brain and from the brain to the body.
Review
1. Name the organs of the central nervous system.
2. Which part of the brain controls conscious functions such as reasoning?
3. What are the roles of the brain stem?
4. Sam’s dad was in a car accident in which his neck was broken. He survived the injury but is now paralyzed from the neck down. Explain why.
Image Reference Attributions
[Figure 1] Credit: Laura Guerin; Source: CK-12 Foundation; License: CC BY-NC 3.0
[Figure 2] Credit: Hana Zavadska; Source: CK-12 Foundation; License: CC BY-NC 3.0
[Figure 3] Credit: User:Grm wnr/Wikimedia Commons, modified by Sam McCabe; Laura Guerin; Source: commons.wikimedia.org/wiki/File:Central_nervous_system.svg; CK-12 Foundation; License: Public Domain; CC BY-NC 3.0
[Figure 4] Credit: Laura Guerin; Source: CK-12 Foundation; License: CC BY-NC 3.0
[Figure 5] Credit: Laura Guerin; Flickr:FaceMePLS; Source: CK-12 Foundation; http://www.flickr.com/photos/faceme/5448327286/; License: CC BY-NC 3.0; CC BY 2.0
13.6: Central Nervous System is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3186706602573395, "perplexity": 3603.3087990558643}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711336.41/warc/CC-MAIN-20221208114402-20221208144402-00025.warc.gz"}
|
https://www.physicsforums.com/threads/measuring-gravitational-redshift-due-to-galaxies-without-gr.813046/
|
# Measuring Gravitational Redshift due to Galaxies without GR
1. May 9, 2015
### quantumfoam
Hi guys.
How do astrophysicists measure the redshift of electromagnetic waves from galaxies due to gravity without the use of General Relativity? If I can be more specific, how do astrophysicists know that the gravitational redshift of light emitted from some part of a galaxy or galaxy cluster is small relative to kinematic redshifts (if these light emitting components of a galaxy or galaxy cluster are moving away from us of course) without using General Relativity to prove that such a redshift is small? For example, when creating the rotation curves for galaxies, it is often claimed that the redshifts measured are significantly due to kinematic effects rather than due to gravitational redshifts. How do astrophysicists know this without using General Relativity to show that this is true?
2. May 10, 2015
### Orodruin
Staff Emeritus
Well, to start with, it does not matter for the rotational curves as you are looking at differences of redshift rather than absolute values.
You can also estimate the amount of redshift by estimating the mass.
3. May 10, 2015
### quantumfoam
I'm sorry. I don't think I understand how it doesn't matter for rotational curves. Could you please explain it a little more?
4. May 10, 2015
### Bandersnatch
When you measure rotation, you look at red and blue-shifted lines in the galactic (or stellar) spectrum spread symmetrically around the expected line position. It'll produce a symmetrical spread of certain width, corresponding to the difference in velocities between the limb rotating towards you (blue-shifted) and the one rotating away (red-shifted). It doesn't matter where exactly the whole thing is in the spectrum (i.e., how shifted by gravity), since it's the width that gives you rotation data, and it doesn't change.
5. May 10, 2015
### quantumfoam
Thank you very much!
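(Editorial aside, with made-up numbers.) Bandersnatch's point can be made concrete: the rotation speed comes from the width of the line split, and a constant gravitational redshift rescales both the blue- and red-shifted wavelengths by the same factor, so the inferred speed is unchanged.

```python
C_KM_S = 299792.458  # speed of light in km/s

def rotation_speed_km_s(lambda_blue_nm, lambda_red_nm):
    """Rotation speed from the blue- and red-shifted positions of one line,
    using the non-relativistic Doppler relation v = c * dlambda / lambda0,
    with lambda0 taken as the midpoint of the two measured wavelengths."""
    lambda0 = 0.5 * (lambda_blue_nm + lambda_red_nm)
    half_width = 0.5 * (lambda_red_nm - lambda_blue_nm)
    return C_KM_S * half_width / lambda0

# hypothetical H-alpha line seen at the approaching and receding limbs
print(rotation_speed_km_s(655.96, 656.96))   # roughly 228 km/s
```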
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9290589690208435, "perplexity": 869.4372765382378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509170.2/warc/CC-MAIN-20181015100606-20181015122106-00193.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/181768-vector-space-null-space.html
|
# Thread: Vector space and null space
1. ## Vector space and null space
Please can you guys help me to solve the following questions
Q. Let $Z$ be a proper subspace of an n-dimensional vector space $X$, and let $x_0 \in X - Z$. Show that there is a linear functional $f$ on $X$ such that $f(x_0)=1$ and $f(x)=0$ for all $x \in Z$.
2. Originally Posted by kinkong
Please can you guys help me to solve the following questions
Q. Let $Z$ be a proper subspace of an n-dimensional vector space $X$, and let $x_0 \in X - Z$. Show that there is a linear functional $f$ on $X$ such that $f(x_0)=1$ and $f(x)=0$ for all $x \in Z$.
You may be overthinking this. To specify any linear transformations between two vector spaces one needs only specify the action of the map on a basis. So, let $\{x_1,\cdots,x_m\}$ be a basis for $Z$ now since $x_0$ is independent of this set you know that $\{x_0,x_1,\cdots,x_m\}$ can be extended to some basis $\{x_0,x_1,\cdots,x_m,x_{m+1},\cdots,x_{n-1}\}$ for $X$ and define your linear functional however you want, perhaps $\varphi:X\to F$ given by $\varphi(x_k)=\delta_{k,0}$ (the Kronecker delta function) and extend by linearity.
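(Editorial illustration, not part of the original thread.) In coordinates the construction reduces to a small linear system: a functional on $F^n$ is $f(x) = c \cdot x$, and we need $c \cdot x_0 = 1$ with $c \cdot z = 0$ for every basis vector $z$ of $Z$. A minimal numerical sketch with numpy, using a made-up $Z$ and $x_0$:

```python
import numpy as np

# Z = span of two vectors in R^4; x0 lies outside Z
Z = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
x0 = np.array([1.0, 0.0, 0.0, 0.0])

# constraints: c.x0 = 1 and c.z = 0 for each basis vector z of Z
A = np.vstack([x0, Z])
b = np.array([1.0, 0.0, 0.0])
c, *_ = np.linalg.lstsq(A, b, rcond=None)   # consistent system, exact min-norm solution

print(np.round(A @ c, 10))   # [1. 0. 0.]  ->  f(x0)=1 and f vanishes on Z
```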
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990462064743042, "perplexity": 228.57489343972463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806720.32/warc/CC-MAIN-20171123031247-20171123051247-00006.warc.gz"}
|
https://www.radiologyweb.org/Physics_Data/Physics/110_Physics.htm
|
MRI 2-LARMOR PRECESSION
#### Physics
MRI 2-LARMOR PRECESSION
1. In addition to energy separation of the parallel and antiparallel spin states, the protons also experience a torque in a perpendicular direction from the applied magnetic field that causes precession.
2. The precession occurs at an angular frequency (number of rotations/sec about an axis of rotation) that is proportional to the magnetic field strength $B_0$.
3. The Larmor equation describes the dependence between the magnetic field $B_0$ and the angular precessional frequency: $\omega_0 = \gamma B_0$ (where $\gamma$ = gyromagnetic ratio, unique to each element).
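A short numerical illustration (editorial addition, not from the cited references), assuming the commonly quoted proton gyromagnetic ratio γ/2π ≈ 42.58 MHz/T:

```python
GAMMA_OVER_2PI_MHZ_PER_T = 42.58   # proton gyromagnetic ratio gamma/(2*pi), in MHz per tesla

def larmor_frequency_mhz(b0_tesla):
    """Precessional (Larmor) frequency f0 = (gamma / 2*pi) * B0, in MHz."""
    return GAMMA_OVER_2PI_MHZ_PER_T * b0_tesla

for b0 in (0.5, 1.5, 3.0):
    print("B0 = %.1f T -> f0 = %.1f MHz" % (b0, larmor_frequency_mhz(b0)))
# 1.5 T gives ~63.9 MHz; 3.0 T gives ~127.7 MHz
```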
##### REFERENCES
1. Bushberg, J. T. (2012). The essential physics of medical imaging. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins. Find it at Amazon
2. Heggie, J. C., Liddell, N. A., & Maher, K. P. (1997). Applied imaging technology. Melbourne: St. Vincents Hospital.
Ⓒ A. Manickam 2018
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349982500076294, "perplexity": 6349.6883203414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573145.32/warc/CC-MAIN-20220818003501-20220818033501-00452.warc.gz"}
|
https://rhumbl.com/docs/guide/
|
## Tutorial
Graph Studio is a visual editor for network data. Using Graph Studio, you can create entities, connect them by mapping relationships, edit styles and visualization layout — and see it all in real-time. With Graph Studio, you can import an Excel spreadsheet to generate a visualization, and export your visualization back into a spreadsheet.
There are two ways of making network visualizations in Graph Studio: 1) editing within Graph Studio itself using our visual editor or 2) editing in an Excel spreadsheet, and then importing the spreadsheet to Rhumbl.
For beginners, we recommend first working through the next section of creating a map with Graph Studio, then next continuing with the example of importing a spreadsheet.
### Visual editing
Let's go through creating a first map in Graph Studio to see the basics of editing within Graph Studio.
Start by clicking on your home dashboard and select the first option of Make a fresh map. There are two ways of making maps: 1) stamping out a map from an example template like we're doing now, or 2) uploading an Excel spreadsheet.
In the next screen, you'll see the option to choose a data template:
These examples are only here to get you started; the data is fully-editable. In this tutorial we'll select the first example — Education. The next screen allows you to select a color scheme. This color scheme is fully-editable as well, so for now, just choose a color scheme that you find most appealing. We'll go with the “Light” scheme.
#### Making entities
One of the fundamental ideas of the network visualization is the idea of entities. An entity is a thing of importance in your data. Entities can be visualized as nodes on the map, or they can be grouping entities. Group-type entities are not visualized on the map — they serve only as a “container” to hold other entities. Read more on the fundamentals of entities and entity types.
Notice that in the Education example, we have 3 entity types: uber group, department and subject. You can rename these entity types, but you cannot change whether they are a node-type or group-type entity.
Let's create a new entity type named teacher, and make that a node-type entity:
Now, let's create a new entity of type teacher. Click on the button to pop open the panel, click , and create an entity named spongebob squarepants. Specify the entity type as teacher (the entity type we just created).
Hit . If you look to your map now, there is a red node, and if you hover over it, you should see spongebob squarepants.
The reason why it's red is because it's easy to tell that this is a newly-created entity. This is very useful for when you're working on maps with more than one person, and you want to see what's changed.
Important! At this point, the changes you've made are not permanently saved. If you refresh the page or close the window, those changes are lost. To permanently save them, click the button on the menu panel:
#### Making relationships
Thus far, you've created one entity type (teacher), and created a node that is of that type. Another fundamental concept in a graph visualization is the idea of relationships between entities. Let's make some connections to and from spongebob.
First, look at the Relationships summary. There are already three existing relationship types. The first one is special and it is called HAS_PARENT_OF — it is a parent-type relationship: parent-type relationships describe how entities are grouped. Read more on the fundamentals of relationships and relationship types.
We'll add a parent relationship going from spongebob to Math. Click on Add relationship:
Click Save. On first glance, nothing seems to have happened: the map still looks the same! That's because you need to tell Rhumbl to regenerate the layout for you by clicking on the button. Wait for it...and the new node is now grouped under its new parent Math:
Now let's add a new relationship type. Create a new relationship type and call it teaches. This relationship is a directed relationship because the direction matters — a person can teach a subject, but a subject cannot teach a person.
We'll now create another relationship, pointing from spongebob to Differential Equations, of type teaches. After creating this relationship, you should see the new relationship in red:
And as before, to see how this new edge may affect the layout, hit to regenerate the visualization layout.
That's it! That's how you create a map using the visual editing method. Next, we'll talk about creating a map using the spreadsheet editing method.
The spreadsheet must be in one of the following formats: Adjacency List or Adjacency Matrix. Many users have reported finding the Adjacency List a bit more intuitive, so we'll start with that one for this tutorial.
In this quickstart, we'll walk through an example using the Adjacency List template. This example describes how classes organized into different departments in a school are related.
The Adjacency List format lists each entity in each row, and columns of the entity across in columns. Let's take a look at this first row in the spreadsheet:
id type name SCH School type
This first row describes an entity whose id is SCH, name is School whose type is School. These three properties are required. Read for more detail on spreadsheet columns.
Step 2 Then, upload your data as prompted. You will see the following parsed result:
This tells you that Rhumbl has understood from the spreadsheet that there are 7 entities total, and amongst those 7 entities, there are 11 relationships. What are those relationships?
There are two corequisite relationships, parsed from these cells in our spreadsheet:
This says that the mech has a corequisite of calc-1 and em has a corequisite of calc-2.
We also have 6 prerequisite relationships:
We also have a bunch of belongs to relationships. They say that that each entity belongs to one group. The only entity that doesn't have a parent group is the root entity, which in our case is the School entity. This row says that Department X is under the School.
Now that we've gone over what these mean, let's move on. Hit the button to choose styles.
Rhumbl provides you with a few different color schemes. You can change specific colors later, but for now, we'll just pick the Dark scheme.
There you go! Those are the two main ways of making a graph visualization: either via the visual editor or through editing a spreadsheet. We suggest jumping in to make a map, or checking out a few spreadsheet examples.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15929286181926727, "perplexity": 1588.4608613167954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738555.33/warc/CC-MAIN-20200809132747-20200809162747-00443.warc.gz"}
|
http://www.gradesaver.com/textbooks/science/physics/conceptual-physics-12th-edition/chapter-34-think-and-solve-page-654/32
|
## Conceptual Physics (12th Edition)
Raise the temperature of $4.0 \times 10^{8}$ kg of water by 50 Celsius degrees.
Calculate the energy released by the explosion. $$\frac{(20\ \text{kilotons})(4.2 \times 10^{12}\ \text{J/kiloton})}{4184\ \text{J/kilocalorie}} = 2.0 \times 10^{10}\ \text{kilocalories}$$ This is enough energy to raise the temperature of $2.0 \times 10^{10}$ kg of water by one Celsius degree. Equivalently, this energy could raise the temperature of 1/50 of that amount of water, $4.0 \times 10^{8}$ kg, by 50 Celsius degrees. As stated in the question, this is about half a million tons.
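(Editorial check of the arithmetic, using the conversion factors quoted above.)

```python
energy_joules = 20 * 4.2e12             # 20 kilotons at 4.2e12 J per kiloton
kilocalories = energy_joules / 4184.0   # 4184 J per kilocalorie
kg_raised_by_1_degree = kilocalories    # 1 kcal warms 1 kg of water by 1 C
kg_raised_by_50_degrees = kilocalories / 50
print("%.1e kcal; %.1e kg warmed by 50 C" % (kilocalories, kg_raised_by_50_degrees))
# ~2.0e+10 kcal; ~4.0e+08 kg
```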
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7370926141738892, "perplexity": 519.6939024897564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105455.37/warc/CC-MAIN-20170819143637-20170819163637-00309.warc.gz"}
|
http://alfan-farizki.blogspot.jp/2015/07/pymc-tutorial-bayesian-parameter.html
|
## Sunday, 26 July 2015
### PyMC Tutorial #1: Bayesian Parameter Estimation for Bernoulli Distribution
Suppose we have a Coin which consists of two sides, namely Head (H) and Tail (T). All of you might know that we can model a toss of a Coin using Bernoulli distribution, which takes the value of $$1$$ (if H appears) with probability $$\theta$$ and $$0$$ (if T appears) with probability $$1 - \theta$$. In this case, $$\theta$$ is also called as the parameter of a Bernoulli distribution since knowing the value of $$\theta$$ is sufficient for determining $$P(H)$$ and $$P(T)$$. For a fair Coin, $$\theta$$ is set to $$0.5$$, which means that we have equal degree of belief for both sides.
This time, we aim at estimating the parameter $$\theta$$ of a particular Coin. To do that, first, we need to collect the data sample, which serves as our evidence, from an experiment. Second, we use that data to estimate the parameter $$\theta$$. Suppose, to collect the data, we toss the Coin 10 times and record the outcomes. We get a sequence of $$\{H, H, T, H, ..., T\}$$ which consists of 10 elements, in which each element represents the outcome of a single coin toss. By assuming that this data sample is independent and identically distributed (often referred to as i.i.d.), we then perform a statistical computation to determine the estimate of $$\theta$$.
There are two broad categories of estimating the parameter of a known probability distribution. The first one is so called Maximum Likelihood Estimation (MLE) and the second one is Bayesian parameter estimation. We will examine both methods briefly in this post. In the end, we will focus on Bayesian parameter estimation and show the usage of PyMC (Python library for MCMC framework) to estimate the parameter of a Bernoulli distribution.
Maximum Likelihood Estimation (MLE)
Please do not be afraid when you hear the name of this method! Even though the name of this method is somewhat “long-and-complicated”, the opposite is actually true: MLE often involves just basic counting of events in our data. As an example, MLE estimates the parameter θ of the Coin using the following, “surprisingly simple”, statistic
$\hat{\theta} = \frac{\# Heads}{\# Heads + \# Tails}$
Because of that, people usually refer to MLE as a “Frequentist approach”. In general, MLE aims at seeking a set of parameters which maximizes the likelihood of seeing our data.
$\hat{\theta} = \underset{\theta}{\operatorname{argmax}}\; P(x_1, x_2, ..., x_n|\theta)$
Now, let us try to implement MLE for estimating the parameter of a Bernoulli distribution (using the Python programming language). We simulate the experiment of tossing a Coin N times using a list of integer values, in which 1 and 0 represent Head and Tail, respectively. Each value is generated randomly from a Bernoulli distribution.
$P(H) = P(1) = \theta$
$P(T) = P(0) = 1 - \theta$
We use Bernoulli-like distribution provided by Scipy library. So, we need to import this library as the first step.
-code1-
from scipy.stats import bernoulli
Next, we generate a sample data using the following code.
-code2-
sampleSize = 20
theta = 0.2
def generateSample(t, s):
    return bernoulli.rvs(t, size=s)
data = generateSample(theta, sampleSize)
The preceding code will assign “data” with the following value
-code3-
array([1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0])
We can see that assigning theta to 0.2 makes the number of 0’s much more than the number of 1’s, which means that Tail has higher probability to occur compared to Head.
Now, we pretend that we do not know the parameter $$\theta$$ and we only know the data. Given that data, we are going to estimate the value of $$\theta$$, which is unknown to us. We use MLE, which means that we need to implement the aforementioned statistic
-code4-
def thetaMLE(data):
    count = 0
    for i in data:
        count += i
    return count / float(len(data))
Now, let’s see several estimates when we use different sample sizes.
-code5-
def showSeveralEstimates(sampleSizes):
    for size in sampleSizes:
        estimate = thetaMLE(generateSample(0.2, size))
        print("using sample with size %i : theta = %f" % (size, estimate))

showSeveralEstimates([10,100,1000,2000,5000,10000])
The preceding code yields the following results (the results may differ each time you run this program since it involves random sampling):
-code6-
using sample with size 10 : theta = 0.3
using sample with size 100 : theta = 0.23
using sample with size 1000 : theta = 0.194
using sample with size 2000 : theta = 0.1965
using sample with size 5000 : theta = 0.1982
using sample with size 10000 : theta = 0.2006
Look! We can see that as the size of the data increases, the estimate gets closer to the real value of $$\theta$$, that is $$0.2$$. In other words, if you want a better estimate, you need more data.
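One way to quantify this behaviour: for Bernoulli data the standard error of the MLE is approximately
$SE(\hat{\theta}) \approx \sqrt{\frac{\hat{\theta}(1-\hat{\theta})}{n}}$
so quadrupling the sample size roughly halves the uncertainty, which matches the shrinking fluctuations in the results above.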
Bayesian Parameter Estimation
Although MLE is often easy to set up and compute, it has several limitations. One of them is that MLE cannot leverage prior information or knowledge about the parameter itself. For example, based on our experience, we may be quite certain that a Coin is fair; unfortunately, when we estimate the parameter using MLE, we cannot incorporate such knowledge into the computation. Bayesian Parameter Estimation, on the other hand, does take prior knowledge about the parameter into account, which can make the estimate more realistic and accurate [1][2]. Often we hold a prior belief about something before observing the data or evidence, but once we finally see that evidence, we may change our belief [1][2].
Instead of directly estimating $$P(data|parameter)$$, Bayesian Parameter Estimation estimates $$P(parameter|data)$$. Here, prior information about parameter $$\theta$$ is encoded as a probability distribution $$P(\theta)$$, which means that we consider $$\theta$$ as a value of a random variable. When we quantify uncertainty about $$\theta$$, it will be easy for us to encode our prior belief. After we observe our data, we then change our prior belief towards $$\theta$$, into our posterior belief, denoted as $$P(\theta|X)$$.
$P(\theta|x_1, x_2, ..., x_n) \propto P(\theta) P(x_1, x_2, ..., x_n|\theta)$
In the preceding formula, $$P(\theta)$$ is the prior distribution of $$\theta$$, and $$P(X|\theta)$$ is the likelihood of our observed data; the likelihood represents how likely we are to see the observed data given the parameter $$\theta$$. $$P(\theta |X)$$ is the posterior distribution, which represents our belief about $$\theta$$ after taking both the data and the prior knowledge into account (i.e., after we see our data).
We usually use the expected value to give the best estimate of $$\theta$$. In other words, given the data $$X$$, the estimate of $$\theta$$ is obtained by calculating $$E[\theta |X]$$. $$E[\theta |X]$$ is usually called the Bayes estimator.
$\hat{\theta} = E[\theta |x_1, x_2, ..., x_n]$
Hierarchical Bayesian Model
The prior distribution $$P(\theta)$$ may itself be modeled using so-called hyperprior distributions; this kind of model is known as a Hierarchical Bayesian Model. Furthermore, we can also model the hyperprior distribution itself using a hyper-hyperprior distribution, and so on. The reasoning behind using a hyperprior is that, instead of fixing the distribution of $$\theta$$ directly (which may be available from a previous experiment), why don’t we let the “present data tell us about $$\theta$$ by themselves” [2].
Let us revisit the previous example, in which we try to estimate the Bernoulli parameter $$\theta$$, given the data collected by conducting several tosses of a Coin, $$\{H, T, H, H, H, T, ..., T\}$$. Suppose $$x_i$$ represents the value of a single Coin toss.
$x_i \sim Ber(\theta)$
Now, we can model the parameter $$\theta$$ using a Beta distribution. In other words, $$\theta$$ is a random variable that follows a Beta distribution with parameters $$\alpha$$ and $$\beta$$; $$\alpha$$ and $$\beta$$ are called hyper-parameters. We use the Beta distribution since it is the conjugate prior of a Bernoulli distribution. We will not elaborate further on the notion of conjugacy in this post; however, there are several mathematical reasons for using conjugate prior distributions, one of them being that a conjugate prior makes our computation easier.
$\theta \sim Beta(\alpha, \beta)$
The posterior distribution of $$\theta$$ can be then denoted as follows
$P(\theta |x_1, ..., x_n, \alpha, \beta) \propto P(\theta |\alpha, \beta) P(x_1, ..., x_n |\theta, \alpha, \beta)$
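Because the Beta prior is conjugate to the Bernoulli likelihood, this posterior actually has a closed form: it is again a Beta distribution whose parameters are updated by the observed counts, and the Bayes estimator follows directly:
$P(\theta |x_1, ..., x_n, \alpha, \beta) = Beta(\alpha + \#Heads, \beta + \#Tails)$
$E[\theta |x_1, ..., x_n] = \frac{\alpha + \#Heads}{\alpha + \beta + n}$
We will use this closed form as a sanity check for the MCMC result at the end of the post.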
We can also represent the preceding model using the well-known plate notation, where $$N$$ represents the number of tosses that we perform (the size of the sample data). We now get back to our main goal: estimating $$\theta$$ (more precisely, the posterior distribution of $$\theta$$) using Bayesian parameter estimation. We have just learnt that the estimation task involves computing the expected value of $$\theta$$ ($$E[\theta |X]$$), which means that we need to perform integrations. Unfortunately, in some cases these integrations are not feasible analytically, or at least it is difficult to achieve a specified accuracy, so we need an approximation method to fall back on.
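Concretely, the Bayes estimator requires an integral over the posterior,
$E[\theta |X] = \int_{0}^{1} \theta \, P(\theta |X) \, d\theta$
and for models without a convenient closed form, this integral (together with the normalizing constant of the posterior) is what we have to approximate numerically.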
There are many numerical approximation methods for Bayesian parameter estimation. One of the most common is Markov Chain Monte Carlo (MCMC). MCMC approximates the posterior distribution of $$\theta$$ by drawing samples over a number of iterations; as more samples are collected, the empirical distribution gets closer and closer to the “true” posterior distribution of $$\theta$$.
PyMC: A Python Library for MCMC Framework
Now, we are ready to play with the programming problem. Python has a library that provides an MCMC framework for our problem. This library is called PyMC. You can go directly to its official website if you want to know more about it.
First, let’s import several libraries that we need, including PyMC and pymc.Matplot for drawing histogram.
-code7-
import pymc as pc
import pymc.Matplot as pt
import numpy as np
from scipy.stats import bernoulli
Next, we need to create our model.
-code8-
def model(data):
    theta_prior = pc.Beta('theta_prior', alpha=1.0, beta=1.0)
    coin = pc.Bernoulli('coin', p=theta_prior, value=data, observed=True)
    mod = pc.Model([theta_prior, coin])
    return mod
In the preceding code, we represent $$\theta$$ as “theta_prior”, which follows a Beta distribution with parameters $$\alpha$$ and $$\beta$$; here we set both $$\alpha$$ and $$\beta$$ to 1.0. “coin” represents a sequence of coin tosses (NOT a single toss), in which each toss follows a Bernoulli distribution (this corresponds to $$X$$ in the preceding plate notation). We set “observed=True” since this is our observed data, and “p=theta_prior” means that the parameter of “coin” is “theta_prior”. Our goal is to estimate the expected value of “theta_prior”, which is unknown. MCMC will perform several iterations to generate samples from the posterior of “theta_prior”, and each iteration improves the quality of those samples. Finally, we wrap all of our random variables using the Model class.
As before, we need a small module that can generate our toy sample:
-code9-
def generateSample(t, s):
    return bernoulli.rvs(t, size=s)
Suppose we have already generated a sample, and pretend that we do not know the parameter of the distribution it comes from. We then use the generated sample to estimate $$\theta$$.
-code10-
def mcmcTraces(data):
    mod = model(data)
    mc = pc.MCMC(mod)
    mc.sample(iter=5000, burn=1000)
    return mc.trace('theta_prior')[:]
The preceding function produces traces, i.e., the MCMC samples generated over a number of iterations. With this code, MCMC iterates 5000 times. “burn” specifies the number of initial iterations that are discarded as burn-in, i.e., samples drawn before the chain has (hopefully) reached the posterior distribution of $$\theta$$. The function returns the MCMC traces, excluding the samples generated during the burn-in period.
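One simple way to check whether the burn-in period was long enough is to look at the retained trace itself; below is a minimal matplotlib sketch that reuses the helper functions defined above (the variable name demo_trs is just for this illustration):

import matplotlib.pyplot as plt

demo_trs = mcmcTraces(generateSample(0.7, 100))
plt.plot(demo_trs)                       # the retained chain should look stationary (no drift)
plt.xlabel("iteration (after burn-in)")
plt.ylabel("sampled theta")
plt.show()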
Now, let’s perform the MCMC run on our model, and plot the posterior distribution of $$\theta$$ on a histogram.
-code11-
sample = generateSample(0.7, 100)
trs = mcmcTraces(sample)
pt.histogram(trs, "theta prior; size=100", datarange=(0.2,0.9))
Suppose the data is generated from a Bernoulli distribution with parameter $$\theta = 0.7$$ (size = 100). If we draw the traces of $$\theta$$ using Histogram, we will get the following figure.
We can see that the distribution is centered in the range 0.65–0.80. We can be fairly happy with this result (since the estimate is close to 0.70), yet the variance is still quite high. Now, let’s see what happens when we increase the size of our observed data!
The following histogram was generated when we set size to 500:
The following histogram was generated when we set size to 5000:
See! When we increase the size of our data, the variance of the distribution gets lower, and thus we become more confident about our estimate!
If we need a point estimate of $$\theta$$, we can use the expected value (mean) of that distribution. We can use the numpy library to get the mean of our traces.
-code12-
#estimated theta
est_theta = np.mean(trs)
print(est_theta)
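As a quick sanity check, the MCMC mean can be compared against the closed-form Bayes estimator given earlier for the Beta(1.0, 1.0) prior used in our model; a minimal sketch reusing the “sample” and “trs” variables from above:

import numpy as np

def analyticPosteriorMean(data, alpha=1.0, beta=1.0):
    heads = np.sum(data)
    tails = len(data) - heads
    return (alpha + heads) / float(alpha + beta + len(data))

print("analytic Bayes estimate:", analyticPosteriorMean(sample))
print("MCMC estimate          :", np.mean(trs))

The two values should agree closely; a large gap usually means the chain needs more iterations or a longer burn-in.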
Main References:
[1] Building Probabilistic Graphical Models with Python, Kiran R. Karkera, PACKT Publishing 2014
[2] Bayesian Inference, Byron Hall (STATISTICAT, LLC)
Alfan Farizki Wicaksono
(firstname [at] cs [dot] ui [dot] ac [dot] id)
Faculty of Computer Science, Universitas Indonesia (UI)
Written in Tambun, Bekasi, 26 July 2015
http://openstudy.com/updates/50bfa1efe4b0231994eca012
## scottman: How do I find the complex cube root of 8?
1. satellite73
method one: solve $x^3=8$ $x^3-8=0$ $(x-2)(x^2+2x+4)=0$ quadratic formula will give the solutions to $x^2+2x+4=0$
2. scottman
I see and so 2 is one root and the other 2 complex numbers will be the other 2 roots?
3. kerstie
@Satellite73
4. satellite73
method two: solve \(x^3=1\) and multiply by two. Since one solution is 1, divide the unit circle into three equal parts: one is at 1, the other two are \(-\frac{1}{2}+\frac{\sqrt{3}}{2}i\) and its conjugate \(-\frac{1}{2}-\frac{\sqrt{3}}{2}i\); multiply these numbers by 2
5. satellite73
to answer your question, yes, the solution to the quadratic will give you the two complex roots
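For completeness, carrying out the quadratic formula on $x^2+2x+4=0$ gives
$x = \frac{-2 \pm \sqrt{4-16}}{2} = -1 \pm i\sqrt{3},$
which is exactly $2\left(-\frac{1}{2} \pm \frac{\sqrt{3}}{2}i\right)$, so the two methods agree.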
6. satellite73
@kerstie hello
7. scottman
ok thanks a lot, you helped me greatly
8. satellite73
yw
http://scitation.aip.org/content/aip/journal/jap/113/15/10.1063/1.4801917
Equivalence of Kröner and weighted Voigt-Reuss models for x-ray stress determination
DOI: 10.1063/1.4801917
Affiliations:
1 IBM T.J. Watson Research Center, Yorktown Heights, New York 10598, USA
J. Appl. Phys. 113, 153509 (2013)
## Figures
FIG. 1.
Plot of ½S_2 under Reuss, Voigt, Neerfeld and Kröner limits for Cu as a function of Γ. x_Kr is depicted graphically as the ratio of to . All of these XEC intersect at approximately Γ = 0.276, corresponding to the vertical dotted line. The single crystal elastic constants of Cu (from Ref. ) can be found in Table .
FIG. 2.
(a) Weighted Reuss-Voigt fraction values, x_Kr, for various materials possessing cubic elastic symmetry plotted as a function of the anisotropy factor, A = 2C_1212/(C_1111 − C_1122). Single crystal elastic constants were obtained from Ref. ; (b) linear fit of weighted Reuss-Voigt fraction values for A < 5.
FIG. 3.
Weighted Reuss-Voigt fraction values, x_Kr, for various materials possessing cubic elastic symmetry plotted as a function of the dimensionless parameter, Q = (C_1111 − C_1122)/(C_1111 + 2C_1122). The asymptote 5/9 corresponds to the case of Q → ∞ and C_2 ≪ C_1.
FIG. 4.
Plot of the weighted Reuss-Voigt fraction, x_Kr, of ½S_2 for Ti as a function of the orientation parameter, η_2, where squares correspond to specific (hkil) reflections. The continuous curve corresponds to values calculated using Eq. .
FIG. 5.
Plot of the Voigt, Reuss and Kröner limit ½S_2 values for Ti as a function of η_2, where the shaded portion delineates the range of η_2 in which the Kröner XEC lies inside those under the Voigt and Reuss limits.
## Tables
Table I.
Single crystal elastic constants and weighted Voigt-Reuss fraction corresponding to the Kröner limit, xKr, for several cubic materials.
http://www.cut-the-knot.org/m/Geometry/ThreeSquares.shtml
Extras in Bottema's Configuration
22 April 2015, Created with GeoGebra
Problem 1
Given three squares $BCDE,$ $ABIF,$ and $ACJG,$ the latter two with centers $O$ and $O'.$ Let $P$ be the midpoint of $DG,$ $N$ that of $EF,$ $S$ the intersection of $NO'$ and $OP.$
Prove that $AS$ passes through $M,$ the midpoint of $DE.$
Solution
We'll use complex numbers and shall not distinguish between points and the associated complex numbers.
Scaling if necessary, we may define $B=1,$ $C=-1,$ $A=a-bi,$ where $a,b$ are arbitrary real numbers.
With this, we obtain further values: $D=-1-2i,$ $E=1-2i,$ $M=-2i,$
\begin{align} G&=i(C-A)+A\\ &=i(-1-a+bi)+(a-bi)\\ &=(a-b)-i(1+a+b). \end{align}
Similarly,
\begin{align} F&=-i(B-A)+A\\ &=-i(1-a+bi)+(a-bi)\\ &=(a+b)+i(-1+a-b). \end{align}
Now we can find all the midpoints:
\begin{align} O&=\frac{1}{2}(B+F)=\frac{1}{2}[(1+a+b)+i(-1+a-b)],\\ O'&=\frac{1}{2}(C+G)=\frac{1}{2}[(-1+a-b)-i(1+a+b)]. \end{align}
(Note that $iO'=O$.) Further,
\begin{align} P&=\frac{1}{2}(D+G)=\frac{1}{2}[(-1+a-b)-i(3+a+b)],\\ N&=\frac{1}{2}(E+F)=\frac{1}{2}[(1+a+b)-i(3-a+b)]. \end{align}
Finally, we'll show that $S$ is the midpoint of both $NO'$ and $OP:$
\begin{align} \frac{1}{2}(N+O')&=\frac{1}{2}[a-i(2+b)],\\ \frac{1}{2}(O+P)&=\frac{1}{2}[a-i(2+b)], \end{align}
which proves that $ONPO'$ is a parallelogram. But there is more: $S$ happens to be the midpoint of $A$ and $M.$ Indeed, directly
\begin{align} \frac{1}{2}(A+M)&=\frac{1}{2}[a-i(2+b)]. \end{align}
Thus not only $AS$ passes through $M,$ $AM$ is divided by $S$ in half as are $NO'$ and $PO,$ making $AO'PMNO$ a parahexagon.
Note also that $ONPO'$ becomes a rectangle for $A$ on the perpendicular bisector of $BC,$ i.e., when the two small squares are equal. It is a square (equal to the two small squares at that) when $A$ lies on $BC.$
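A quick symbolic check of the midpoint computations (a minimal sketch using sympy; the point definitions mirror the solution above):

import sympy as sp

a, b = sp.symbols('a b', real=True)
i = sp.I

B, C, A = 1, -1, a - b*i
D, E, M = -1 - 2*i, 1 - 2*i, -2*i
G = i*(C - A) + A
F = -i*(B - A) + A
O  = (B + F) / 2      # center of square ABIF
O1 = (C + G) / 2      # center of square ACJG (O' in the text)
P  = (D + G) / 2
N  = (E + F) / 2

S1 = sp.expand((N + O1) / 2)
S2 = sp.expand((O + P) / 2)
S3 = sp.expand((A + M) / 2)
print(sp.simplify(S1 - S2), sp.simplify(S2 - S3))   # prints: 0 0

Both differences vanish for arbitrary real a, b, confirming that the three midpoints coincide at the common point S.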
Problem 2
Given two squares $ABIF$ and $ACJG,$ join $I$ and $J,$ and erect perpendiculars to $BC$ at $I,$ $B,$ $C,$ and $J.$ Let $X,Y,U,V$ be the intersections as shown below.
Then $JU=IW.$
Solution
Add a perpendicular to $BC$ through $A:$
Then $\Delta CJX=\Delta ACZ,$ implying $CX=AZ.$ Also, $\Delta BIY=\Delta ABZ,$ implying $BY=AZ.$ It follows that $BY=CX$ and, therefore, $IW=JU$ as having equal projections on the same line $BC.$
Acknowledgment
The two problems, which are due to Ruben Dario from the Peru Geometrico group, have been communicated to me by Leo Giugiuc. I decided to place them on the same page since both relate to Bottema's theorem. The solution to the first is by Leo Giugiuc.
https://codereview.stackexchange.com/questions/157741/a-shell-script-to-mount-a-disk-image-file
# A shell script to mount a disk image file
I'm trying to create a bash shell script which mounts a disk image file. Not only that but checks to see if the disk image already exists. Is there anyway I could improve my script? Currently my script looks like this:
#!/bin/bash

MOUNTPOINT="/myfilesystem"

if [ ! -d "${MOUNTPOINT}" ]; then
  if [ -e "${MOUNTPOINT}" ]; then
    echo "Mountpoint exists, but isn't a directory..."
    exit 1
  else
    sudo mkdir -p "${MOUNTPOINT}"
  fi
fi

if [ "$(echo ${MOUNTPOINT}/*)" != "${MOUNTPOINT}/*" ]; then
  echo "Mountpoint is not empty!"
  exit 1
fi

if mountpoint -q "${MOUNTPOINT}"; then
  echo "Already mounted..."
  exit 0
fi

## 2 Answers

The first conditional group can be reorganized to make it flatter and simpler:

if [ ! -e "${MOUNTPOINT}" ]; then
  sudo mkdir -p "${MOUNTPOINT}"
elif [ ! -d "${MOUNTPOINT}" ]; then
  echo "Mountpoint exists, but isn't a directory..."
  exit 1
fi

The exit 0 is pointless in the last condition, and as such, you could replace it with a one-liner using &&:

mountpoint -q "${MOUNTPOINT}" && echo "Already mounted..."

• Error messages should be printed on standard error, like echo "Chocolate missing!" >&2
• Internal variables should be lowercase, like mount_point. Don't worry if it ends up being the same string as a command like mountpoint; Bash handles that just fine (and your IDE should highlight them to show the difference).
• The idiomatic Bash shebang line is #!/usr/bin/env bash (disclaimer: my answer)
• sudo is generally discouraged within scripts, for at least two reasons:
  1. The target system may not have sudo. This is uncommon, but some use su instead.
  2. It makes it harder to automate the use of the script, since sudo may prompt for a password.
• It is generally expected of a shell script that it will not have unintended side effects. Your script will create the mount point directory if it doesn't exist.
• Instead of hard coding your mount point you can easily make the script … scriptable:

  for mount_point
  do
    [the contents of your script]
  done

  this will loop through all the arguments you provide ("$@")
• I personally prefer moving the then and do keywords to a separate line, to avoid having to use ; in my scripts.
https://readdy.github.io/validation/lennard_jones_fluid
# Lennard-Jones fluid - thermodynamic state variables
We consider a simple fluid of particles that interact due to the Lennard Jones potential
$$U(r) = 4\varepsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^6 \right]$$
To validate the integration of the diffusion process and correct implementation of energies and forces, we compare observables to results from the literature. The observables are: the mean potential energy of the system per particle $U$, the pressure $P$ and the radial distribution function $g(r)$.
The thermodynamic quantities of a Lennard Jones fluid are typically expressed in terms of the rescaled density $\rho^* = (N/V)\sigma^3$ and the rescaled temperature $T^* = k_BT/\varepsilon$, where $N$ is the number of particles in the system, which is constant over time, and $V$ is the available volume. In simulation practice we set $\sigma=1$ and $\varepsilon=1$ to achieve the reduced units. The quantity $\sigma^2 / 4D$ gives rise to a typical time scale, where $D$ is the self diffusion coefficient of the particles. In practice we set the diffusion coefficient to 1.
We use an Euler-Maruyama scheme to integrate the positions of particles according to the overdamped Langevin equation of motion, in contrast to an integration of positions and momenta in the underdamped limit.
The pressure can be measured from the acting forces according to [4]
$$PV = Nk_BT + \langle \mathscr{W} \rangle$$
where
$$\mathscr{W} = \frac{1}{3} \sum_i \sum_{j>i} \mathbf{r}_{ij} \mathbf{f}_{ij},$$
is the virial that describes the deviation from ideal-gas-behavior. It is measured in terms of pairwise acting forces $\mathbf{f}_{ij}$ between particles $i$ and $j$ which are separated by $\mathbf{r}_{ij}$. This is implemented by ReaDDy's pressure observable.
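To make the bookkeeping concrete, here is a plain-NumPy sketch of the two equations above for a single configuration (the pair separation vectors and pairwise forces are assumed to be given as arrays; this is only an illustration, not ReaDDy's internal implementation):

import numpy as np

def virial_pressure(r_ij, f_ij, n_particles, volume, kbt):
    # r_ij, f_ij: arrays of shape (n_pairs, 3) holding the separation vector and the
    # pairwise force for every pair i < j of one configuration.
    w = np.sum(r_ij * f_ij) / 3.0             # instantaneous virial, W = (1/3) sum_ij r_ij . f_ij
    return (n_particles * kbt + w) / volume   # P V = N k_B T + W, solved for P

Averaging the instantaneous values over the production run yields the $\langle \mathscr{W} \rangle$ that enters the pressures reported in the table below.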
Results
| origin | cutoff radius $r_c$ | density $\rho$ | temperature $T$ | pressure $P$ | potential energy per particle $U$ |
| --- | --- | --- | --- | --- | --- |
| [1] | 4 | 0.3 | 3 | 1.023(2) | -1.673(2) |
| [2] | 4 | 0.3 | 3 | 1.0245 | -1.6717 |
| HALMD [3] | 4 | 0.3 | 3 | 1.0234(3) | -1.6731(4) |
| ReaDDy | 4 | 0.3 | 3 | 1.035(7) | -1.679(3) |
| [1] | 4 | 0.6 | 3 | 3.69(1) | -3.212(3) |
| [2] | 4 | 0.6 | 3 | 3.7165 | -3.2065 |
| HALMD [3] | 4 | 0.6 | 3 | 3.6976(8) | -3.2121(2) |
| ReaDDy | 4 | 0.6 | 3 | 3.70(2) | -3.208(7) |
[1] Molecular dynamics simulations, J. K. Johnson, J. A. Zollweg, and K. E. Gubbins, The Lennard-Jones equation of state revisited, Mol. Phys. 78, 591 (1993)
[2] Integral equations theory, A. Ayadim, M. Oettel, and S Amokrane, Optimum free energy in the reference functional approach for the integral equations theory, J. Phys.: Condens. Matter 21, 115103 (2009).
[3] HAL's MD package, http://halmd.org/validation.html
[4] Allen, M. P., & Tildesley, D. J. (1987). Computer Simulation of Liquids. New York: Oxford University Press.
In [1]:
import os
import numpy as np
import readdy
readdy.__version__
v2.0.2-103
Utility methods
In [2]:
def average_across_first_axis(values):
    n_values = len(values)
    mean = np.sum(values, axis=0) / n_values  # shape = n_bins
    difference = values - mean  # broadcasting starts with last axis
    std_dev = np.sqrt(np.sum(difference * difference, axis=0) / n_values)
    std_err = np.sqrt(np.sum(difference * difference, axis=0) / n_values ** 2)
    return mean, std_dev, std_err
def lj_system(edge_length, temperature=1.):
    system = readdy.ReactionDiffusionSystem(
        box_size=[edge_length, edge_length, edge_length],
        unit_system=None
    )
    system.kbt = temperature
    system.potentials.add_lennard_jones("A", "A", m=12, n=6, epsilon=1., sigma=1., cutoff=4., shift=True)
    return system
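As a small usage example of this helper, with the same numbers as the $\rho = 0.3$ measurement below ($12^3$ particles at temperature 3):

n_particles = 12 ** 3
density = 0.3
edge_length = (n_particles / density) ** (1. / 3.)  # about 17.9 sigma
system = lj_system(edge_length, temperature=3.)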
Wrap the whole simulation and analysis in a function and perform it for the two densities 0.3 and 0.6.
In [3]:
def equilibrate_and_measure(density=0.3):
    n_per_axis = 12
    n_particles = n_per_axis ** 3
    edge_length = (n_particles / density) ** (1. / 3.)
    pos_x = np.linspace(-edge_length/2., edge_length/2.-1., n_per_axis)
    pos = []
    for x in pos_x:
        for y in pos_x:
            for z in pos_x:
                pos.append([x,y,z])
    pos = np.array(pos)
    print("n_particles", len(pos), "box edge_length", edge_length)
    assert len(pos)==n_particles

    def pos_callback(x):
        nonlocal pos
        n = len(x)
        pos = np.zeros((n,3))
        for i in range(n):
            pos[i][0] = x[i][0]
            pos[i][1] = x[i][1]
            pos[i][2] = x[i][2]
        print("saved positions")

    # create system
    system = lj_system(edge_length, temperature=3.)

    # equilibrate
    sim = system.simulation(kernel="CPU")
    sim.observe.particle_positions(2000, callback=pos_callback, save=None)
    sim.observe.energy(500, callback=lambda x: print(x), save=None)
    sim.record_trajectory(stride=1)
    sim.output_file = "lj_eq.h5"
    if os.path.exists(sim.output_file):
        os.remove(sim.output_file)
    sim.run(n_steps=10000, timestep=1e-4)

    # measure
    sim = system.simulation(kernel="CPU")
    sim.observe.energy(200)
    sim.observe.pressure(200)
    sim.observe.rdf(
        200, bin_borders=np.linspace(0.5, 4., 50),
        types_count_from="A", types_count_to="A", particle_to_density=density)
    sim.output_file = "lj_measure.h5"
    if os.path.exists(sim.output_file):
        os.remove(sim.output_file)
    sim.run(n_steps=10000, timestep=1e-4)

    # obtain results
    energy_mean, _, energy_err = average_across_first_axis(energy)  # time average
    energy_mean /= n_particles
    energy_err /= n_particles
    pressure_mean, _, pressure_err = average_across_first_axis(pressure)  # time average
    rdf_mean, _, rdf_err = average_across_first_axis(rdf)  # time average
    return {
        "energy_mean": energy_mean, "energy_err": energy_err,
        "pressure_mean": pressure_mean, "pressure_err": pressure_err,
        "rdf_mean": rdf_mean, "rdf_err": rdf_err, "rdf_bin_centers": bin_centers
    }
In [ ]:
result_low_dens = equilibrate_and_measure(density=0.3)
In [5]:
result_low_dens
Out[5]:
{'energy_mean': -1.6633748459442572,
'energy_err': 0.003700932686850724,
'pressure_mean': 1.0180387240087916,
'pressure_err': 0.006649187919330464,
'rdf_mean': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
8.73771465e-04, 1.39050761e-01, 7.77121707e-01, 1.35550697e+00,
1.47394443e+00, 1.40574566e+00, 1.28072743e+00, 1.17403174e+00,
1.09877314e+00, 1.04211586e+00, 1.00661263e+00, 9.69183397e-01,
9.56833466e-01, 9.55143798e-01, 9.59051739e-01, 9.63753456e-01,
9.75949419e-01, 9.88910358e-01, 1.01500161e+00, 1.01482771e+00,
1.01116499e+00, 1.00857361e+00, 1.01259469e+00, 1.00827253e+00,
1.00810550e+00, 1.00061586e+00, 1.00449665e+00, 1.00104653e+00,
9.98569960e-01, 1.00111624e+00, 9.99901194e-01, 1.00192471e+00,
1.00256785e+00, 9.96061189e-01, 1.00175797e+00, 9.98210922e-01,
1.00212448e+00, 9.97903933e-01, 1.00252626e+00, 9.97402108e-01,
9.99352217e-01, 1.00306833e+00, 1.00450091e+00, 9.96851149e-01,
1.00616989e+00]),
'rdf_err': array([0. , 0. , 0. , 0. , 0.00030675,
0.00396483, 0.00923446, 0.01010521, 0.00899745, 0.00763738,
0.00711674, 0.00618417, 0.00605006, 0.00575771, 0.00540888,
0.00501564, 0.00443238, 0.00435785, 0.00389686, 0.00442619,
0.00385527, 0.00381415, 0.00429112, 0.00437909, 0.00400882,
0.00341835, 0.00299592, 0.00383812, 0.00318276, 0.00269004,
0.00337694, 0.00319706, 0.00261357, 0.00246254, 0.00284636,
0.00274702, 0.00360729, 0.00273072, 0.00348141, 0.00249099,
0.00276968, 0.00235732, 0.00259392, 0.0022092 , 0.00224786,
0.00249379, 0.00213182, 0.00181033, 0.00215059]),
'rdf_bin_centers': array([0.53571429, 0.60714286, 0.67857143, 0.75 , 0.82142857,
0.89285714, 0.96428571, 1.03571429, 1.10714286, 1.17857143,
1.25 , 1.32142857, 1.39285714, 1.46428571, 1.53571429,
1.60714286, 1.67857143, 1.75 , 1.82142857, 1.89285714,
1.96428571, 2.03571429, 2.10714286, 2.17857143, 2.25 ,
2.32142857, 2.39285714, 2.46428571, 2.53571429, 2.60714286,
2.67857143, 2.75 , 2.82142857, 2.89285714, 2.96428571,
3.03571429, 3.10714286, 3.17857143, 3.25 , 3.32142857,
3.39285714, 3.46428571, 3.53571429, 3.60714286, 3.67857143,
3.75 , 3.82142857, 3.89285714, 3.96428571])}
In [ ]:
result_hi_dens = equilibrate_and_measure(density=0.6)
In [7]:
result_hi_dens
Out[7]:
{'energy_mean': -3.196356019125088,
'energy_err': 0.005104397290158243,
'pressure_mean': 3.7676811569266397,
'pressure_err': 0.016351902602610355,
'rdf_mean': array([0. , 0. , 0. , 0. , 0.0025589 ,
0.21776871, 1.10357986, 1.72461953, 1.72278639, 1.50528089,
1.28779136, 1.12018195, 1.00522172, 0.9256746 , 0.88286723,
0.85705591, 0.86490041, 0.88784101, 0.92720019, 0.97757109,
1.02364833, 1.05275071, 1.06584753, 1.05618891, 1.04527277,
1.02953571, 1.00498551, 0.98527115, 0.98062792, 0.97275726,
0.97639494, 0.98442037, 0.98816345, 0.99719895, 1.0057899 ,
1.01017784, 1.01483666, 1.01094614, 1.00672468, 1.002867 ,
0.99891791, 0.99768625, 0.99587933, 0.99519994, 0.99237075,
0.99940666, 0.99912237, 0.99836934, 1.00126852]),
'rdf_err': array([0. , 0. , 0. , 0. , 0.00035327,
0.00307937, 0.00598724, 0.00770554, 0.00632459, 0.00657016,
0.00504308, 0.00449333, 0.0040328 , 0.00451565, 0.00347907,
0.00382738, 0.00380265, 0.00389827, 0.00313076, 0.00312584,
0.00274444, 0.00320238, 0.00299662, 0.00245097, 0.00250133,
0.00319461, 0.00233672, 0.00236277, 0.00261564, 0.00215941,
0.00205376, 0.00247026, 0.0019712 , 0.00213257, 0.00216365,
0.00198214, 0.00179956, 0.00198444, 0.00177342, 0.00148938,
0.00195599, 0.00165285, 0.00189096, 0.00163013, 0.00147786,
0.00156586, 0.00161101, 0.00165896, 0.00163258]),
'rdf_bin_centers': array([0.53571429, 0.60714286, 0.67857143, 0.75 , 0.82142857,
0.89285714, 0.96428571, 1.03571429, 1.10714286, 1.17857143,
1.25 , 1.32142857, 1.39285714, 1.46428571, 1.53571429,
1.60714286, 1.67857143, 1.75 , 1.82142857, 1.89285714,
1.96428571, 2.03571429, 2.10714286, 2.17857143, 2.25 ,
2.32142857, 2.39285714, 2.46428571, 2.53571429, 2.60714286,
2.67857143, 2.75 , 2.82142857, 2.89285714, 2.96428571,
3.03571429, 3.10714286, 3.17857143, 3.25 , 3.32142857,
3.39285714, 3.46428571, 3.53571429, 3.60714286, 3.67857143,
3.75 , 3.82142857, 3.89285714, 3.96428571])}
In [8]:
%matplotlib inline
import matplotlib.pyplot as plt
print("density 0.3:")
print("mean energy per particle {}\nerr energy per particle {}".format(
result_low_dens["energy_mean"], result_low_dens["energy_err"]))
print("pressure {}\nerr pressure {}".format(
result_low_dens["pressure_mean"], result_low_dens["pressure_err"]))
print("density 0.6:")
print("mean energy per particle {}\nerr energy per particle {}".format(
result_hi_dens["energy_mean"], result_hi_dens["energy_err"]))
print("pressure {}\nerr pressure {}".format(
result_hi_dens["pressure_mean"], result_hi_dens["pressure_err"]))
plt.plot(result_low_dens["rdf_bin_centers"], result_low_dens["rdf_mean"], label=r"density $\rho=0.3$")
plt.plot(result_hi_dens["rdf_bin_centers"], result_hi_dens["rdf_mean"], label=r"density $\rho=0.6$")
plt.xlabel(r"distance $r/\sigma$")
plt.ylabel(r"radial distribution $g(r)$")
plt.legend(loc="best")
plt.show()
density 0.3:
mean energy per particle -1.6633748459442572
err energy per particle 0.003700932686850724
pressure 1.0180387240087916
err pressure 0.006649187919330464
density 0.6:
mean energy per particle -3.196356019125088
err energy per particle 0.005104397290158243
pressure 3.7676811569266397
err pressure 0.016351902602610355
https://foster-family.jp/chihou/chihou17/gaiyou/21gifuken.htm
Children in social care, FY2005 (Heisei 17) — prefectural summary No. 21 (Gifu), as of March 2006

Of the children in this prefecture who cannot live with their own parents, 5.6% grow up in foster families, ranking 43rd among the 61 prefectures and designated cities.

Placement by type of care (FY2005): foster families 34 children (5.6%), children's homes 536 (88.6%), infant homes 35 (5.8%), 605 children in total. Nationwide: 3,293 (9.1%), 29,850 (82.6%), 3,008 (8.3%), 36,151 in total.

Trend of the foster-placement rate toward the national target of 15% by FY2009 — prefecture: 5.1% (FY2002), 5.0% (FY2003), 4.3% (FY2004), 5.6% (FY2005); nationwide: 7.4%, 8.1%, 8.4%, 9.1%. Reaching 15% would require roughly 69 more children placed with foster parents in the prefecture (about 1,547 more nationwide); the projected figures were extrapolated with Excel's TREND function.

Registered versus actively fostering families: 151 registered households, 28 with children placed (18.5%), on average 1.2 children per fostering household, 34 children in total; nationwide 7,737 registered, 2,370 with placements (30.6%), average 1.4 children, 3,293 children. Children's homes: capacity 586, 568 in care (96.9% occupancy); nationwide 33,676 capacity, 30,830 in care (91.5%). Infant homes: capacity 35, 35 in care (100.0%); nationwide 3,669 capacity, 3,077 in care (83.9%).

2007/10/28 by sido ( http://foster-family.jp/ )
https://ryansblog.xyz/post/587646c5-9cb9-469f-81d6-a30862678201
# Java - Package Private Method
In this post, we will take a close look at the package-private method in Java. Many people think that introducing this level of access to Java was a mistake, and there are many discussions on this topic on the internet. I am not sure whether it really was a mistake, but it does require some effort to understand the concept. According to the official documentation, only the class itself and other classes in the package where the class is defined can access a package-private method of that class. A subclass in a different package cannot access it. The structure of the example code is the following:
- packageA
- Base
* protectedMethod
* packagePrivateMethod
- Derived
* protectedMethod
* packagePrivateMethod
- packageB
- Derived
* protectedMethod
* packagePrivateMethod
The Base class is defined in packageA. It has two methods: (1) protectedMethod and (2) packagePrivateMethod.
package methodOverride.packageA;

public class Base {
    protected void protectedMethod() {
        System.out.println("Base.protectedMethod()");
    }

    void packagePrivateMethod(){
        System.out.println("Base.packagePrivateMethod()");
    }
}
We also define a derived class in packageA. Both methods can be overridden. Here comes the confusing part: as mentioned previously, a derived class should not have access to the package-private method of its base class, but in this case, because it lives in the same package (packageA), it can access and override Base.packagePrivateMethod.
package methodOverride.packageA;

public class Derived extends Base {
    @Override
    protected void protectedMethod() {
        System.out.println("Derived.protectedMethod()");
    }

    @Override
    void packagePrivateMethod(){
        System.out.println("Derived.packagePrivateMethod()");
    }

    public static void main(String[] args) {
        Base b1 = new Base();
        Base b2 = new Derived();
        Derived d1 = new Derived();

        System.out.println("\nBase reference and Base instance");
        b1.protectedMethod();
        b1.packagePrivateMethod();

        System.out.println("\nBase reference and Derived instance");
        b2.protectedMethod();
        b2.packagePrivateMethod();

        System.out.println("\nDerived reference and Derived instance");
        d1.protectedMethod();
        d1.packagePrivateMethod();
    }
}
We now define a derived class in a different package called packageB. As we can see in the code below, we can still override protectedMethod, but this time we can no longer override the packagePrivateMethod of packageA.Base. The reason is that a derived class cannot access the package-private methods of its base class when it is not in the same package where the base class is defined.
package methodOverride.packageB;

import methodOverride.packageA.Base;

public class Derived extends Base {
    @Override
    protected void protectedMethod() {
        System.out.println("Derived.protectedMethod()");
    }

    void packagePrivateMethod() {
        System.out.println("Derived.packageAccessMethod()");
    }
}
If used in the same package, package-private methods behave almost exactly the same as protected methods. Both of them are virtual methods, and both of them are accessible everywhere in the same package (in our example, packageA). The code below shows the use of the two methods:
package methodOverride.packageA;

public class Derived extends Base {
    @Override
    protected void protectedMethod() {
        System.out.println("Derived.protectedMethod()");
    }

    @Override
    void packagePrivateMethod(){
        System.out.println("Derived.packagePrivateMethod()");
    }

    public static void main(String[] args) {
        Base b1 = new Base();
        Base b2 = new Derived();
        Derived d1 = new Derived();

        System.out.println("\nBase reference and Base instance");
        b1.protectedMethod();
        b1.packagePrivateMethod();

        System.out.println("\nBase reference and Derived instance");
        b2.protectedMethod();
        b2.packagePrivateMethod();

        System.out.println("\nDerived reference and Derived instance");
        d1.protectedMethod();
        d1.packagePrivateMethod();
    }
}
Here is the output of the above program.
Base reference and Base instance
Base.protectedMethod()
Base.packagePrivateMethod()
Base reference and Derived instance
Derived.protectedMethod()
Derived.packagePrivateMethod()
Derived reference and Derived instance
Derived.protectedMethod()
Derived.packagePrivateMethod()
Because package-private methods are not visible to other packages, we cannot use them outside the package in which they are defined.
### Conclusion
If used in the same package, package-private methods behave almost the same as protected methods. Because they are not visible to other packages, they cannot be used outside the package where they are defined. Therefore, the difference between the two is that protected methods are part of the API of the class (in the sense that they are exposed to the users of the class and become part of the developers' responsibility) while package-private methods are not. In other words, we can think of package-private methods as protected methods that are not part of the API. Essentially, if we want to use the polymorphism feature of Java but do not want to expose the details of the implementation, we use package-private methods.
----- END -----
http://gmatclub.com/forum/calling-all-duke-2008-applicants-50217-260.html?kudos=1
# Calling all Duke 2008 applicants...
VP
Re: Calling all Duke 2008 applicants... [#permalink] 23 Jan 2008, 20:28
mba2010 wrote:
Just got a call from a Duke 2Y welcoming me (I think this is always a nice touch).
Blue Devil Weekend is March 28-31. She also said the admit pkg was coming in 2-3 weeks...so either there are two separate packages or she was mistaken.
Just checked the mail and haven't gotten my packet yet. Anything in it besides the letter, nervous?
Hmmm... weird! Not sure what other package can be there!
The package contained a welcome binder, a small writing pad and a Duke MBA sticker.
Senior Manager
Re: Calling all Duke 2008 applicants... [#permalink] 28 Jan 2008, 08:02
Got my "Your Duke App is complete" e-mail over the weekend.
Manager
Re: Calling all Duke 2008 applicants... [#permalink] 28 Jan 2008, 08:15
rajpdsouza wrote:
Got my "Your Duke App is complete" e-mail over the weekend.
me too.
Director
Re: Calling all Duke 2008 applicants... [#permalink] 28 Jan 2008, 09:07
and me three
VP
Re: Calling all Duke 2008 applicants... [#permalink] 28 Jan 2008, 13:52
Got a scholarship e-mail from Duke. Unfortunately can't open it due to the firewall at work but people on BW are saying that the e-mail doesn't state the exact $$$.

Current Student
Re: Calling all Duke 2008 applicants... [#permalink] 28 Jan 2008, 14:00
nervousgmat wrote:
Got a scholarship e-mail from Duke. Unfortunately can't open it due to the firewall at work but people on BW are saying that the e-mail doesn't state the exact $$$.
I got the same email. The actual amount is in snail mail that went out today...lame!

Intern
Re: Calling all Duke 2008 applicants... [#permalink] 28 Jan 2008, 14:05
Same here. It's weird that they would say you got one, but not how much. Wonder if a call to the admissions office would be out of place? Looks like they timed it so it would go out at end of day for the AdCom, though.

Intern
Re: Calling all Duke 2008 applicants... [#permalink] 28 Jan 2008, 18:09
D'u know what is the mean amount of scholarships last year? What is the difference between institutional scholarships and tuition scholarships?

SVP
Re: Calling all Duke 2008 applicants... [#permalink] 29 Jan 2008, 06:22
last year we could access the scholarship letter from their online application site - I didn't have to wait for the email.

VP
Re: Calling all Duke 2008 applicants... [#permalink] 31 Jan 2008, 08:22
Has anyone received a letter from Duke with a scholarship amount yet?

Intern
Re: Calling all Duke 2008 applicants... [#permalink] 31 Jan 2008, 12:18
Hi, I'm a R1 admit interested in HSM. I received the scholarship email too but no mail yet... I thought they might FedEx it like they did the admission packet...but I guess not. From reading this thread and the posts on the BW forum it seems there are a lot of scholarship offers out there for Fuqua R1 so I'm not getting my hopes up in terms of dollar value.
TJ

Current Student
Re: Calling all Duke 2008 applicants... [#permalink] 31 Jan 2008, 16:46
nervousgmat wrote:
Has anyone received a letter from Duke with a scholarship amount yet?
I received mine today

Current Student
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 13:39
Got my scholarship letter today. $26k per year. What'd you guys get?
I'm surprised, since I was expecting something along the lines of $5k

VP
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 13:50
mba2010 wrote:
Got my scholarship letter today. $26k per year. What'd you guys get?
I'm surprised, since I was expecting something along the lines of $5k
Whoa, mba2010! $26K per year is not too shabby! Congrats! Is this going to effect your decision to go to Kellogg at all?
futuredukemba, how much did you get, if you don't mind sharing?
I haven't received a letter yet. Maybe I'll get it tonight, when I get home from work...
Current Student
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 15:17
I doubt I'll go to Duke, despite the generous offer. I've been pretty set on Kellogg for a while now.
Current Student
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 18:15
$18K per year
nervousgmat wrote:
mba2010 wrote:
Got my scholarship letter today. $26k per year. What'd you guys get?
I'm surprised, since I was expecting something along the lines of $5k
Whoa, mba2010! $26K per year is not too shabby! Congrats! Is this going to effect your decision to go to Kellogg at all?
futuredukemba, how much did you get, if you don't mind sharing?
I haven't received a letter yet. Maybe I'll get it tonight, when I get home from work...
VP
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 18:57
I got a letter today: the offer is \$22K/year.
Decision, decisions...
GMAT Club Legend
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 19:45
Nice job nervous...raking it in now. Definitely makes for tougher decisions. Still hoping we can sway you at DAK.
VP
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 20:12
riverripper wrote:
Nice job nervous...raking it in now. Definitely makes for tougher decisions. Still hoping we can sway you at DAK.
Why can't Kellogg show me some love and give me a little something??? That would make my decision so much easier!
CEO
Re: Calling all Duke 2008 applicants... [#permalink] 01 Feb 2008, 20:26
Nervous tell us your secret !!!
http://www.ck12.org/geometry/Applications-of-the-Pythagorean-Theorem/studyguide/Pythagorean-Theorem-Study-Guide/r1/
# Applications of the Pythagorean Theorem
Pythagorean Theorem Study Guide
Student Contributed
This study guide is an overview of the Pythagorean theorem and its converse, Pythagorean triples, and proving the distance formula.
http://www.koreascience.or.kr/article/JAKO201836256831796.page
# A Study on the Origin of Organic Matter in Seawater of Estuarine Areas Using Chemical Oxygen Demand
• Kim, Young-Sug (Marine Environment Research Division, National Institute of Fisheries Science) ;
• Koo, Jun-Ho (Marine Environment Research Division, National Institute of Fisheries Science) ;
• Kwon, Jung-No (Marine Environment Research Division, National Institute of Fisheries Science) ;
• Lee, Won-Chan (Marine Environment Research Division, National Institute of Fisheries Science)
• Received : 2018.07.23
• Accepted : 2018.10.26
• Published : 2018.10.31
#### Abstract
In this study, we examined the principal factors and water-quality components that determine the concentration of chemical oxygen demand (COD) in seawater in Korean estuaries, namely those of the Han, Geum, Youngsan, Seomjin, and Nakdong rivers. Principal component analysis indicated that the main factors determining the COD concentration in seawater were salinity (reflecting exogenous inputs) and autochthonous production indicated by chlorophyll-a; organic matter in the submarine sediment layer also had a secondary effect. Regression slopes were used to assess the contribution of the water-quality components to the COD concentration in each estuary. The effect of salinity was significant across the whole survey, and the effect of chlorophyll-a also appeared in April and August. By estuary, the largest contribution came from chlorophyll-a in the Nakdong River and from salinity in the Han and Youngsan rivers; in the Geum River, salinity and chlorophyll-a both contributed strongly, while in the Seomjin River both showed a low contribution.
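As a rough illustration of the two analysis steps described above (principal component analysis to identify the dominant factors, then regression slopes to gauge each component's contribution), here is a minimal Python sketch. It is not the authors' code: the data are synthetic placeholders, and the variable names salinity, chlorophyll_a, and cod are assumptions chosen only to mirror the abstract.

```python
# Minimal sketch, not the study's actual analysis: synthetic data standing in for
# measured water-quality components, PCA for the factor structure, and standardized
# regression slopes as a crude contribution measure for COD.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120
salinity = rng.uniform(10, 34, n)           # psu, hypothetical estuarine range
chlorophyll_a = rng.lognormal(1.0, 0.6, n)  # ug/L, hypothetical
# Assume COD falls with salinity (dilution of exogenous inputs) and rises with
# chlorophyll-a (autochthonous production), plus noise.
cod = 5.0 - 0.08 * salinity + 0.4 * chlorophyll_a + rng.normal(0, 0.3, n)

X = np.column_stack([salinity, chlorophyll_a])
Xz = StandardScaler().fit_transform(X)

# Step 1: PCA on the standardized predictors to see which variables dominate.
pca = PCA(n_components=2).fit(Xz)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("component loadings:\n", pca.components_)

# Step 2: regression slopes on standardized predictors as a contribution measure.
reg = LinearRegression().fit(Xz, cod)
for name, coef in zip(["salinity", "chlorophyll-a"], reg.coef_):
    print(f"standardized slope for {name}: {coef:+.3f}")
```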
#### Acknowledgement
Supported by: National Institute of Fisheries Science
https://electronics.stackexchange.com/questions/367152/find-primary-current-in-3-phase-transformer
# Find primary current in a 3-phase transformer
I am really confused by the above problem.
I get the logic of the solution:
Since the line voltage ratio is 2, the current will be halved, thus 50 A.
But I am confused about whether we should take +30° or -30°.
Shouldn't it depend on whether the connection is Yd1 or Yd11?
## -- EDIT --
I was trying to draw some phasors, which I have attached below.
Assuming the IA winding and the Iab winding are in phase:
In this case, taking Ib = 100∠0° gives me IA = 50∠30°.
• R loads give same I phasors as V phasors. Wye the confusion? mysite.du.edu/~jcalvert/tech/threeph.htm – Sunnyskyguy EE75 Apr 8 '18 at 5:08
• mysite.du.edu/~jcalvert/tech/wyedelta.gif – Sunnyskyguy EE75 Apr 8 '18 at 5:13
• @TonyStewartEEsince1975, just tell me: will IA be in phase with Iab, or 180° out of phase with Iab? – Nikhil Kashyap Apr 8 '18 at 13:34
• Look at the picture: do you see 180 or 0 deg? No, because a WYE primary to a Delta secondary is rotated -30 degrees – Sunnyskyguy EE75 Apr 8 '18 at 13:44
• but unlike an analog clock and CW notation for + phase, a + angle is CCW, rising above the zero axis at 3 o'clock – Sunnyskyguy EE75 Apr 8 '18 at 14:01
A WYE-to-DELTA or DELTA-to-WYE connection results in a positive (+30 deg) shift.
Yes, it matters how it is wired. There is never any phase displacement or shift between two common windings, but there MAY be a phase displacement or shift between the line voltages of two transformers, depending on how the windings are connected.
In this example B+ goes to A', or A'-B, or C'-A.
Yd11 gives a 30 deg lead (the secondary leads the primary by 30 deg), so the primary WYE lags the secondary DELTA.
Why does math use CCW for positive angles while clocks rotate in the opposite direction?
... because in math everyone uses the Archimedean and Cartesian standards, with the rightward axis as + and the upward vertical axis as +ve (the same as in schematics, with V+ at the top).
Rotating a circle past any reference point in the stated orientation (CCW is positive), a Delta-to-WYE or WYE-to-DELTA connection in the same given sequence results in a 30 deg rotation to the left, i.e. CCW, i.e. POSITIVE 30 deg.
Positive angles in trigonometry oppose the direction of a clock; it does not mean we go backward in time. The circle rotation standard is also shown here as CCW for positive degrees when rotating the circle past any reference point.
See again: for b to lag a, the triangle must be rotated CCW, for POSITIVE ANGLES with positive time, $\phi=\omega t$.
After understanding the connection, we will now try to draw the voltage and current phasors.
Since the line voltage ratio is 2, we can conclude that the magnitude of the primary line current will be 50 A. Now we will try to get its phase using the current phasors.
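To make the arithmetic concrete, here is a minimal sketch using Python's complex numbers. It assumes, as in the discussion above, a secondary line current of 100 A at 0° and a Yd11 group shift of +30°; for a Yd1 connection the sign of the shift would flip. The variable names are illustrative, not taken from the original problem statement.

```python
# Minimal sketch, assuming a Yd11 wye-delta transformer with a 2:1 line-voltage ratio
# and a secondary line current of 100 A at 0 deg. The primary current magnitude is the
# secondary magnitude divided by the voltage ratio, and its angle is shifted by the
# vector-group displacement (+30 deg here; use -30 deg for Yd1).
import cmath
import math

def phasor(magnitude: float, angle_deg: float) -> complex:
    """Build a complex phasor from a magnitude and an angle in degrees."""
    return cmath.rect(magnitude, math.radians(angle_deg))

Ib_secondary = phasor(100, 0)   # assumed secondary line current
voltage_ratio = 2               # line-voltage ratio, primary : secondary
group_shift_deg = +30           # Yd11 assumption; flip the sign for Yd1

IA_primary = phasor(abs(Ib_secondary) / voltage_ratio,
                    math.degrees(cmath.phase(Ib_secondary)) + group_shift_deg)

print(f"|IA| = {abs(IA_primary):.1f} A at {math.degrees(cmath.phase(IA_primary)):+.1f} deg")
# prints: |IA| = 50.0 A at +30.0 deg
```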